The EU AI Act is the world’s first comprehensive legal framework governing AI across use cases. Following a lengthy consultation process since the Act was first proposed in April 2021 – during which member states and Union institutions proposed comprehensive amendments – a political agreement was reached in December 2023. The text based on this agreement is now going through the final stages of the EU law-making procedure: it was approved by the EU Parliament committees on 13 February 2024 and will be voted on by the Parliament plenary in March 2024. The text preserves the risk-based approach for AI systems, where requirements are proportionate to the level of risk posed by a system, while introducing a separate risk-based classification for general purpose AI (“GPAI”) models.
This guide serves as a starting point for organizations seeking to determine the level of regulatory risk their systems pose in the EU.
The objective of the Act is to protect fundamental rights and prevent harm by regulating AI use within the European Union. This includes not only EU-based entities but also any organization that employs AI in interactions with EU residents, due to the Act’s extraterritorial reach.
The Act categorizes AI systems and delineates different responsibilities for different parties based on the system's risk level. It's crucial for all participants in the AI system's lifecycle to understand their role and the risk classification of their system. Those with obligations under the Act are providers, deployers, importers, and distributors of AI systems, as well as the authorized representatives of providers located outside the EU. Providers—those who develop, train, or market AI systems—are subject to the most comprehensive obligations.
Identification of the role of an entity (operator) and the classification of the AI system in question are crucial steps to prepare for the Act. Naturally, operators must first create a comprehensive inventory of their AI assets. This allows them to determine whether each asset is an AI system or an AI model, leading to a bifurcated assessment process: one for AI systems and another for GPAI models.
For AI models, the initial step is to determine if they qualify as GPAI models. If this is the case, a further assessment is needed to establish if they possess high-impact capabilities, which would classify them as GPAI models with systemic risk.
For AI systems, two simultaneous evaluations are necessary. The first is to ascertain the system's risk category. The second is to determine if the system's operations invoke additional transparency requirements.
In instances where an AI system incorporates a GPAI model, the system is designated as a GPAI system under the Act. In such cases, the system must meet the requirements associated with its risk level, and the underlying model must independently undergo the GPAI-model assessment.
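To make this branching concrete, the following is a minimal sketch in Python of which assessment tracks apply to an asset in an AI inventory; the function name, parameters, and track labels are our own illustration and are not terminology defined by the Act.

```python
# Illustrative sketch of the bifurcated assessment: AI systems follow the
# risk-category and transparency tracks, standalone models follow the GPAI
# track, and a GPAI system (an AI system incorporating a GPAI model) is
# assessed under both tracks independently.

def assessment_tracks(asset_type: str, incorporates_gpai_model: bool = False) -> list[str]:
    tracks = []
    if asset_type == "system":
        tracks.append("risk-category assessment")
        tracks.append("transparency-obligation assessment")
        if incorporates_gpai_model:
            tracks.append("GPAI-model assessment")
    elif asset_type == "model":
        tracks.append("GPAI-model assessment (if the model qualifies as GPAI)")
    return tracks

# Example: a GPAI system is subject to both the system-level and model-level assessments
print(assessment_tracks("system", incorporates_gpai_model=True))
```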
AI systems under the Act are assigned to one of three distinct risk categories: prohibited, high-risk, and minimal risk.
Notably, each system is assigned to exactly one of these risk categories. To classify a system properly, the evaluation should take a top-down approach, starting with the question of whether the system is prohibited. If it is not, the next step is to assess whether the system is high-risk.
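As a simple illustration of this ordering, the sketch below assigns a system to exactly one category, checking the Article 5 prohibition question before the Article 6 high-risk question; the boolean inputs stand in for the legal assessments described in the following sections and are not defined by the Act.

```python
# Minimal sketch of the top-down classification order described above.
def classify_ai_system(is_prohibited: bool, is_high_risk: bool) -> str:
    """Assign a system to exactly one risk category, checking prohibitions first."""
    if is_prohibited:      # Article 5 assessment comes first (top-down)
        return "prohibited"
    if is_high_risk:       # then the Article 6 high-risk assessment
        return "high-risk"
    return "minimal risk"  # everything else

# Example: a system that is neither prohibited nor high-risk is minimal risk
print(classify_ai_system(is_prohibited=False, is_high_risk=False))
```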
When evaluating the risk level of an AI system, the first step is to determine if it falls under any of the prohibited categories outlined in Article 5 of the EU AI Act. This article specifies both absolute prohibitions and certain exceptions. Key prohibitions include:
These prohibitions aim to safeguard personal autonomy, prevent unfair discrimination, and uphold public safety, while also protecting privacy and fundamental human rights.
If an AI system is not prohibited, the subsequent step is to evaluate whether it is a high-risk system as per Article 6 of the Act. The Act outlines three principal scenarios where an AI system may be considered high-risk:
An important update in the latest version of the Act is that systems used in these sectors are automatically classified as high-risk. However, the provider may argue that its system should not be deemed high-risk if it can substantiate that the system does not pose a significant risk to people's health, safety, or fundamental rights.
There are seven key design-related requirements for the high-risk AI systems under the EU AI Act:
However, these requirements should not be confused with the obligations imposed on operators. In fact, ensuring that their AI systems meet these requirements is only one of the obligations of providers.
AI systems that do not fall into the prohibited or high-risk categories are considered to have minimal risks. These systems do not have mandatory requirements but are encouraged to adhere to voluntary codes of conduct.
Within the framework of the EU AI Act, there exists a classification for AI systems often termed "limited risk AI systems". These systems do not fit into the exclusive risk categories but are recognized for the specific risks they pose during user interactions. Article 52 of the Act sets forth a series of transparency obligations for providers or users of certain AI systems to mitigate these risks:
These transparency obligations apply across the spectrum of AI systems, whether they are classified as high-risk or minimal risk. Compliance with these obligations is therefore a separate process that needs to be evaluated alongside the risk category determination.
General-purpose AI (GPAI) models, previously referred to as foundation models in the Parliament's negotiations, have been given a dedicated chapter in the most recent version of the Act.
The provisions for GPAI models come into play specifically when such a model is part of the AI system under consideration. Providers must first evaluate whether their GPAI model carries systemic risk, characterized by high-impact capabilities. If identified as such, the model is subject to additional and more rigorous technical requirements.
The Act imposes obligations primarily on the providers of GPAI models. These obligations are as follows:
In the framework of the Act, a specific and more stringent regulatory regime is applied to certain GPAI models. This is due to their expansive capabilities and the consequential potential impact they may have. To ascertain whether a GPAI model constitutes a GPAI model with systemic risk, an evaluation of methodologies, technical indicators, and benchmarks is conducted.
A key aspect of this determination process is a presumption established by the Act: GPAI models whose training required a cumulative amount of computation exceeding 10²⁵ FLOPs (floating-point operations) are presumed to have high-impact capabilities. This threshold serves as a heuristic for identifying models with significant potential effects, thereby subjecting them to the Act's stricter regime.
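For a rough sense of scale, training compute is sometimes estimated with the common "6 × parameters × training tokens" heuristic; this heuristic and the example figures below are assumptions for illustration only, as the Act does not prescribe any particular estimation method.

```python
# Illustrative only: estimate training compute with the common 6*N*D heuristic
# (roughly 6 FLOPs per parameter per training token) and compare it with the
# Act's 10^25 FLOP presumption threshold. The heuristic is an assumption for
# this sketch; the Act does not define how compute must be estimated.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Example: a hypothetical 500-billion-parameter model trained on 5 trillion tokens
flops = estimated_training_flops(500e9, 5e12)   # 1.5e25 FLOPs
print(f"{flops:.1e} FLOPs -> presumed systemic risk: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```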
In addition to the obligations mentioned above, providers of GPAI models with systemic risk must do the following:
The Act provides for hefty penalties for non-compliance with its provisions. The amount of the fine varies depending on the role of the infringer and the seriousness of the infringement.
The best way to ensure that your systems are ready for the Act, and to avoid penalties, is to take steps early. Whatever the stage of development, classifying your AI systems is of paramount importance. A risk management framework can then be developed and implemented to prevent potential future harm. Getting ahead of this regulation will help you embrace your AI with confidence.
Schedule a call to find out more about how Holistic AI can help you on your journey to AI Act preparedness.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.