The first comprehensive AI Act is here. It’s time to prepare.

Last month, the EU Parliament approved the world’s first comprehensive set of rules regulating the application of Artificial Intelligence. While the Act still requires endorsement from the European Council, it is a landmark development in the governance of AI.

What do we need to know about the new AI Act, and what should companies do to prepare?

Risk-based approach

The EU’s AI regulatory framework is rooted in a “risk-based” approach: the riskier the AI application, the stricter the rules and compliance requirements.

The rules are set around four main risk categories an AI system may fall into:

  • Unacceptable risk;
  • High risk;
  • General-purpose AI models;
  • Minimal risk.

AI systems posing unacceptable risks to the fundamental rights of EU citizens include:

  • Social scoring systems;
  • Biometric categorisation systems based on sensitive characteristics;
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • Emotion recognition in the workplace and schools, and other AI applications capable of manipulating human behaviour;
  • Predictive policing based solely on profiling an individual or assessing their personality traits and characteristics;

AI systems posing high risk:

  • AI systems intended to be used as a safety component of a product are required to undergo a third-party conformity assessment;
  • AI systems that fall within a list of presumed high-risk uses (biometrics, education, etc.);

Finally, there are general-purpose AI models (including large-scale GenAI models) and systems representing minimal risk. Risks associated with these categories are to be mitigated primarily through transparency.

Ground Rules

The AI Act’s regulatory principles centre around protecting fundamental rights, risk transparency and disclosures, governance, and human oversight of AI model operations.

AI Systems presenting a clear threat to fundamental rights, i.e., those falling within the unacceptable risk category, will be banned. There is a specific limited exception for the use of biometric systems by law enforcement (subject to additional safeguards).

Systems classified as high risk due to potential harm to health and safety, fundamental rights, and the rule of law will be subject to heightened regulatory scrutiny and oversight. Obligations include conducting fundamental rights impact assessments and conformity assessments before such systems can be placed on the EU market, as well as heightened data governance, transparency, cybersecurity, risk and quality management, and human oversight requirements, among others.

General-purpose AI models, which include large-scale GenAI models, are, on the other hand, regulated primarily through transparency and disclosure requirements, including detailed technical documentation and summaries of the content used to train those models. There is an exception for free, open-source AI models.

The “What”, “When” and “Who” of the new law

The complete set of regulations will become applicable 24 months after the Act’s entry into force (by mid-2026), with a phased implementation approach:

  • Prohibitions on AI systems posing “unacceptable risks” will take effect six months after entry into force;
  • Codes of practice will apply nine months after entry into force;
  • Rules governing general-purpose AI, including GenAI, will apply twelve months after entry into force.

Who will be subject to the new Law?

It’s not only EU businesses that should care. The AI Act has, to a large extent, extra-territorial application and covers:

  • Businesses operating in the EU;
  • Non-EU businesses providing services to, or processing data of, EU citizens, or otherwise carrying out AI-related activities that involve EU users or data.

Enforcement

The new AI Act envisages hefty fines for non-compliance. The objective is to signal how serious the EU is about enforcement and that a fine is not a “get out of jail free” card. Judging by how the EU enforces the GDPR and the Digital Markets Act, there will be little hesitation in applying them.

Penalty ranges:

  • For infringement of the AI Act’s obligations, the penalty could be up to EUR 15 mln or 3% of total worldwide annual turnover, whichever is higher;
  • For violations involving prohibited AI practices (banned AI systems), the penalty could be up to EUR 35 mln or 7% of total worldwide annual turnover, whichever is higher;
  • Where organisations supply incorrect, incomplete or misleading information in response to a request for information from a designated body, the fine could be up to EUR 7.5 mln or 1% of total worldwide annual turnover, whichever is higher.
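To illustrate the “whichever is higher” mechanics of these tiers, here is a minimal sketch. The function name and the turnover figures are illustrative only, not part of the Act’s text; the caps and percentages are the tiers listed above.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an AI Act fine: the higher of a fixed cap
    and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Hypothetical company with EUR 2 bn worldwide turnover, prohibited-practice
# tier (up to EUR 35 mln or 7% of turnover, whichever is higher):
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 (EUR 140 mln)

# A smaller firm with EUR 100 mln turnover hits the fixed cap instead:
print(max_fine(100_000_000, 35_000_000, 0.07))    # 35000000 (EUR 35 mln)
```

The point of the formula is that for large companies the percentage term dominates, so the effective exposure scales with turnover rather than stopping at the headline cap.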

The imposition of the fine (and its amount) will depend on the following:

  • The nature, gravity and duration of the infringement and its consequences;
  • Prior conduct and penalties;
  • The size, annual turnover, and market share of the AI operator;
  • Any aggravating or mitigating factors, financial benefits gained, or losses avoided;
  • Degree of cooperation in remedying infringement and mitigating adverse effects;
  • Whether the operator self-reported the infringement;
  • The intentional or negligent character of infringement.

What should companies do to prepare?

While there is still time before the new Act enters into force and becomes applicable, companies using or planning to use AI systems should start preparing now.

  • Take a holistic approach to your compliance obligations, rooted in risk management. While the EU is the first jurisdiction to adopt AI governance rules, others will follow shortly. No company wants to be left with siloed compliance and duplicated, disjointed processes, which easily lead to gaps;
  • Your AI governance strategy is your starting point. It needs to align with your organisation’s objectives and strategic goals, and with how you manage personal and non-personal data assets (compliance with privacy and data protection rules is essential);
  • Assess risks in line with the AI Act taxonomy and approach. Understand where your current or proposed AI system is likely to fall. This will frame your implementation and compliance requirements; 
  • Review and map your processes and assess their impact and potential level of compliance with the new law. Where gaps are identified, devise a remedial plan, assign responsibilities and track implementation;
  • Implement a policy documentation framework and ensure relevant personnel understand the requirements and restrictions (communication, training and awareness). Assess your proposed third-party AI model developers and external resources to ensure compliance obligations are duly embedded in the contractual framework;
  • Establish the foundation for ongoing compliance and governance activities, determine the resources you need and make a plan.  

 See you next Saturday!

  


If you are embarking on your own compliance transformational journey and need help designing and enhancing your compliance program, get in touch with us. We are here to help!

 

Comhla Intelligent Compliance

At Comhla, we are driven by a mission to revolutionise the way organisations approach compliance and misconduct prevention. By leveraging our in-depth governance, compliance and internal control expertise, actionable data insights and cutting-edge applied research in organisational science, we help our customers build effective regulatory and compliance management to safeguard their license to operate, protect the bottom line and enhance reputation as responsible businesses.

Follow us on LinkedIn: https://www.linkedin.com/company/comhlaic 

Learn More https://comhla.co

We aim to publish every Saturday.  The information provided in this newsletter is not intended to and does not render legal, accounting, tax, or other professional advice or services.
