The EU aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (EU AI Act). The Act seeks to lay out a normative framework for managing and mitigating the risks of AI systems, building trust in the AI systems used in the EU and protecting the fundamental rights of EU citizens.
In doing so, the EU AI Act introduces a risk-based approach that defines three levels of risk for AI systems: minimal, high, and unacceptable. The Act classifies general purpose AI (GPAI) models according to their systemic impact and also subjects AI systems that interact with users to a set of transparency obligations. Penalties for non-compliance follow a tiered system, with more severe violations of obligations and requirements carrying heftier penalties.
The EU AI Act was first proposed in 2021 and has since undergone an extended consultation process with multiple rounds of amendments. This process has seen changes made throughout the text, including to the obligations of implicated entities and the penalties for violating those obligations. The text is now at the final stages of the EU lawmaking procedure, pending approval by the Parliament.
At Holistic AI, we’ve followed and presented guidance throughout the stages of the EU AI Act’s drafting, and we present this post to outline the penalties of the AI Act under the most recent version of the text.
Key Takeaways
The initial proposal for the EU AI Act by the Commission introduced a three-tier approach to penalties, a structure that was maintained under the Council's General Approach. However, this was later modified into a four-tier model in the Parliament's position.
That said, the latest draft by the Council of the EU reintroduced a three-tier approach to penalties under Article 71, some of which surpass the hefty fines of the GDPR, which are capped at €20,000,000 or 4% of annual worldwide turnover, whichever is higher.
Broadly, penalties under the EU AI Act target three distinct parties: operators of AI systems, providers of general purpose AI models, and Union institutions, agencies, and bodies. All three tiers apply to operators of AI systems, while providers of general purpose AI models are subject to a separate fine imposed under Article 72a (discussed below), and Union bodies have their own penalty system.
The heftiest fines are reserved for using (or placing on the market) systems prohibited under the AI Act due to the unacceptable level of risk that they pose. These infringements are subject to fines of up to €35,000,000 or up to 7% of annual worldwide turnover for companies. This surpasses the penalties under the GDPR, making these some of the heftiest penalties for non-compliance in the EU.
These penalties are incurred for using any of the following systems in the EU:
The second highest fines are set for non-compliance with specific obligations of providers, authorized representatives, importers, distributors, deployers, notified bodies, and users. Non-compliance with the relevant provisions is subject to fines of up to €15,000,000 or up to 3% of annual worldwide turnover for companies.
These penalties are incurred for failing to meet the following provisions on obligations:
1. Obligations of providers of High-Risk AI systems under Article 16
The following obligations apply to providers of High-Risk AI systems (HRAI):
2. Obligations of authorized representatives under Article 25
An “authorized representative” is defined as:
“any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by [the EU AI Act].”
In short, authorized representatives must act in accordance with the mandate received from the provider. This mandate must empower the representative to do the following:
3. Obligations of importers under Article 26
An “importer” is defined as:
“any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union.”
In short, importers must ensure that the HRAI complies with the Regulation. Most importantly, they must verify that the provider has carried out the appropriate conformity assessment procedure, drawn up the technical documentation, affixed the CE conformity marking, and appointed an authorized representative, as required by the EU AI Act.
4. Obligations of distributors under Article 27
A “distributor” is defined as:
“any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.”
Similar to importers, distributors must also verify the conformity of HRAIs with the EU AI Act and cooperate with the competent authorities.
5. Obligations of deployers under Article 29
A “deployer” is an entity using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
Deployers of the HRAIs primarily have the following obligations:
There are additional specific obligations for deployers that are financial institutions.
6. Requirements and obligations of notified bodies under Articles 33, 34(1), 34(3), 34(4), 34a
A notified body is a conformity assessment entity that has been designated in accordance with the AI Act and other relevant Union harmonization legislation.
Such bodies must be organizationally capable, have adequate personnel, and ensure confidence in conformity assessment. They must ensure the highest degree of professional competence and impartiality while carrying out their tasks, and they must be economically independent of the providers of HRAIs and of other operators.
7. Transparency obligations for providers and users under Article 52
If an AI system is designed and deployed to interact with natural persons, providers and users must inform those persons, in a clear and distinguishable manner, that they are interacting with an AI system, unless this is obvious from the point of view of a reasonable natural person who is reasonably well-informed, observant, and circumspect. Systems used for biometric categorization, emotion recognition, and deep fake generation must also disclose this to the natural persons concerned.
Supplying incorrect or incomplete information is a violation of Article 23 of the Regulation, which requires cooperation with competent authorities. Providers of HRAIs shall, upon request by a competent national authority, provide authorities with all the information and documentation necessary to demonstrate the conformity of the HRAI with the requirements set out.
Replying to a request from national authorities or notified bodies with incorrect, incomplete, or misleading information is subject to fines of up to €7,500,000 or up to 1% of total annual worldwide turnover for companies.
In the case of SMEs, including start-ups, each fine in these three tiers is capped at the lower of the two amounts (the fixed sum or the percentage of turnover), whereas offenders that are not SMEs face the higher of the two. For instance, if 3% of annual turnover were greater than €15,000,000, larger businesses could be fined up to the 3% figure while SMEs would face at most €15,000,000; conversely, if 3% of annual turnover were less than €15,000,000, SMEs would face at most the 3% figure while larger businesses could be fined up to €15,000,000.
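To illustrate this cap logic, here is a minimal sketch in Python; the function name, parameters, and turnover figure are our own hypothetical illustration rather than anything prescribed by the Act:

def max_fine_eur(fixed_cap_eur, turnover_pct, annual_turnover_eur, is_sme):
    # Cap derived from a share of total annual worldwide turnover.
    pct_cap = turnover_pct * annual_turnover_eur
    # SMEs and start-ups face the lower of the two caps;
    # all other offenders face the higher.
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Second tier (up to EUR 15,000,000 or 3% of turnover) for an offender
# with EUR 600,000,000 annual turnover, where 3% equals EUR 18,000,000:
print(max_fine_eur(15_000_000, 0.03, 600_000_000, is_sme=False))  # 18000000.0
print(max_fine_eur(15_000_000, 0.03, 600_000_000, is_sme=True))   # 15000000.0

The same comparison covers the other tiers (and the GPAI fines under Article 72a, where the higher of the two amounts applies) by swapping in the relevant fixed cap and percentage.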
A general purpose AI (GPAI) model is defined as:
“an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
This does not cover AI models that are used for research, development, and prototyping activities before they are placed on the market.
Pursuant to Article 72a, the Commission may impose fines on providers of GPAI models of up to 3% of their total worldwide turnover in the preceding financial year or €15,000,000, whichever is higher, if it finds that the provider intentionally or negligently:
According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines can be up to €1,500,000 for non-compliance with the prohibitions of the Act and up to €750,000 for non-compliance with obligations other than those laid down in Article 5.
The general principle of the AI Act is that penalties shall be effective, proportionate, and dissuasive, taking into account the type of offense, the offender's previous actions, and the offender's profile. As such, the EU AI Act acknowledges that each case is individual and designates the stated fines as maximum thresholds; lower penalties can be issued depending on the severity of the offense. Factors that may be considered when determining penalties include:
The EU AI Act also emphasizes a proportionate approach for SMEs and start-ups, which receive lower penalties in light of their size, interests, and economic viability.
There is no Union-wide central authority for imposing fines on AI operators. Since Member States must implement the penalty provisions in national law, whether fines are imposed by competent courts or by other bodies depends on each Member State's legal system. Recital (84) also points out that the implementation of penalties must respect the ne bis in idem principle, meaning that no defendant can be sanctioned twice for the same offense. Member States should consider the margins and criteria set out in the Act.
For providers of GPAI models and for Union bodies, by contrast, fines are imposed by the Commission and the European Data Protection Supervisor, respectively.
The best way to ensure that your systems comply with the Act and avoid penalties is to take steps early. No matter the stage of development of your system, a risk management framework can be developed and implemented to prevent potential future harm. Getting ahead of this regulation will help you embrace your AI with confidence. Schedule a call to find out more about how Holistic AI’s software platform and team of experts can help you manage the risks of your AI.
Last updated 14 February 2024.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.