Case Study

AI Governance: A three-level approach to deploying a new AI model

With this comprehensive three-level approach (Business, AI Governance, and Audit), organizations can systematically implement AI Act compliance, ensuring their AI systems align with both regulatory requirements and business objectives.

Challenge

Many companies are seeking to comply with the AI Act but face uncertainty about the requirements and the necessary controls. At Beltug Privacy Council, we conducted a survey among more than 160 industry professionals to identify the main concerns and adapt our approach accordingly.

Companies are primarily concerned with:

  • Which controls should be applied, and how should they be implemented?
  • How can they ensure that the model’s results comply with company data and company policies?

Solution Proposed by Data Trust Associates

Before building an AI model, it’s crucial to establish a solid foundation across three levels: business, AI governance, and audits. Deploying an AI model demands thorough preparation for optimal performance and compliance.

AI governance plays a vital role in the AI lifecycle. It ensures that the data used for training aligns with business objectives and complies with company policies. It helps maintain transparency, accountability, and ethical AI practices, reduces risks, and ensures the model operates reliably within regulatory frameworks.

DTA assists companies in implementing effective AI governance before the deployment of a model by following a three-level approach:

  • Business Level
    • Defining the objectives of the AI model.
    • Preparing a use case design, serving as the foundation for subsequent steps.
    • Go/no-go decision on deploying the AI model based on regulatory and business requirements.
  • AI Governance Level
    • Integrating AI governance with Data Governance & Data Quality to obtain a 360° view of your AI model’s risks.
    • Reviewing the use case design.
    • Implementing the ISO 42001 standard, ensuring compliance with best practices in artificial intelligence management.
    • Assessing risks and classifying the model according to AI Act requirements.
    • Defining and validating the necessary controls before final approval.
  • Audit Level
    • Planning and documenting the defined controls.
    • Implementing the controls validated by AI governance.
    • Auditing and evaluating controls to ensure their effectiveness.
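As a rough illustration of the risk-assessment step above, the AI Act's risk tiers (unacceptable, high, limited, minimal) can be sketched as a simple decision rule. The criteria in this sketch are heavily simplified assumptions for illustration only; classifying a real model requires a full legal and governance assessment against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # high-risk use cases, strict controls
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Hypothetical, simplified criteria -- a real classification
# requires a full legal assessment of the use case design.
def classify_use_case(prohibited_practice: bool,
                      high_risk_domain: bool,
                      interacts_with_humans: bool) -> RiskTier:
    """Map a use-case design to an AI Act risk tier (illustrative only)."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if high_risk_domain:            # e.g. recruitment, credit scoring
        return RiskTier.HIGH
    if interacts_with_humans:       # e.g. chatbots -> disclosure duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening model falls in a high-risk domain.
tier = classify_use_case(prohibited_practice=False,
                         high_risk_domain=True,
                         interacts_with_humans=True)
print(tier.value)  # prints "high"
```

The outcome of such a classification feeds directly into the go/no-go decision at the Business level and determines which controls must be defined and validated before approval.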

Impact

  • Accelerated compliance through a structured approach.
  • Risk reduction by ensuring that all critical steps are covered.
  • Alignment with international standards through the application of ISO 42001.
  • Informed decision-making on the viability of AI projects before deployment.

This approach ensures a robust and compliant framework for companies, allowing them to focus on innovation and effectively use AI in their operations.

Meet Rudi, Practice Lead GDPR & Compliance @ Data Trust Associates

"Discover the key ingredients to succeed with this initiative and drive real impact."

Get in Touch