Key takeaways:
The EU AI Act is a comprehensive legal framework governing AI made available on the EU market. As product safety legislation, its purpose is to protect the fundamental rights, health, and safety of EU citizens through a risk-based approach: AI practices that conflict with European values and pose too high a threat to fundamental rights, health, and safety are prohibited, while systems posing a high risk must comply with stringent obligations.
Under the EU AI Act’s risk-based approach, obligations for AI systems are proportionate to the level of risk they present, taking into account factors such as the design and intended use. Based on risk level, the EU AI Act specifies corresponding requirements for documentation, auditing, and transparency, with three distinct levels of risk:
In addition to these three distinct risk levels, some systems may also carry limited risk (also known as transparency risk) alongside their risk classification, provided the system is not prohibited. Systems with transparency risk are those that interact with end-users. Users of these systems must be informed that they are interacting with an AI system, that an AI system will be used to infer their characteristics or emotions, or that the content they are interacting with has been generated using AI. Examples include chatbots and deepfakes.
The Act also provides a separate risk-based classification for general-purpose AI (GPAI) models and associated obligations.
Annex III of the EU AI Act defines eight high-risk use cases:
When an AI system falls within one of these use cases, it is automatically considered high-risk. However, if the system does not pose a significant risk of harm to health, safety, or fundamental rights, it will not be considered high-risk. This includes scenarios where an AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity.
In addition, an AI system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I and is required to undergo a third-party conformity assessment related to health and safety risks.
Operators of high-risk systems are subject to different obligations depending on their role. The most stringent obligations fall on providers of high-risk AI systems. Among these, the most crucial is to ensure that a high-risk AI system meets the following technical requirements:
Providers must also comply with operational obligations, including conducting a conformity assessment, drawing up a declaration of conformity, and establishing a post-market monitoring system. The system must then be registered in the EU database and bear the CE marking to indicate its conformity before it can be placed on the market.
On the other hand, obligations for deployers include conducting a fundamental rights impact assessment to determine the potential impacts of deploying a high-risk AI system.
Obligations for providers of general-purpose AI models include drawing up technical documentation and putting in place a policy to comply with Union copyright law.
In addition, some models may be considered to have high-impact capabilities that result in systemic risk if more than 10^25 FLOPs of computing power was used during their training. For these models, there are additional obligations, such as conducting model evaluations.
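To give a sense of scale, a common rule of thumb (not part of the Act itself, and an assumption here) estimates training compute as roughly 6 × parameters × training tokens. A minimal sketch of checking a hypothetical model against the 10^25 FLOP threshold:

```python
# Illustrative sketch only: the 6*N*D rule of thumb and the example
# model figures below are assumptions; the EU AI Act itself only sets
# the 1e25 FLOP threshold for presumed systemic risk.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # threshold set by the Act

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute using compute ≈ 6 * N * D."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP")  # 6.30e+24 FLOP, below the 1e25 threshold
print(presumed_systemic_risk(70e9, 15e12))  # False
```

Note that under the Act the Commission can also designate models as posing systemic risk on other grounds, so compute is a presumption rather than the sole test.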
Article 5 prohibits the following practices that are deemed to pose too high of a risk:
Whether a given AI system is prohibited must be assessed on a case-by-case basis. The European Commission published guidance on 4 February 2025 on prohibited AI practices that can be used to guide this evaluation.
The EU AI Act imposes obligations on a number of parties, such as importers, distributors, deployers, and operators, although the Act primarily applies to providers of AI systems and GPAI models. The EU AI Act governs the EU market, meaning that entities placing their systems or models on the market or putting them into service within the EU must comply regardless of whether they are physically based in the EU.
The Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024. However, it has a gradual application timeline:
Non-compliance carries steep penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the use of prohibited systems. However, the severity of fines depends on the level of transgression, with lower penalties of up to €7.5 million or 1% of turnover for supplying incorrect, incomplete, or misleading information, for example.
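The "whichever is higher" rule can be illustrated with a short calculation (the turnover figures below are hypothetical examples, not taken from the Act):

```python
def max_fine_prohibited_practice(global_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices:
    the greater of €35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# Hypothetical company with €2 billion global annual turnover:
# 7% of turnover (€140m) exceeds the €35m floor, so it applies.
print(max_fine_prohibited_practice(2_000_000_000))  # 140000000.0

# For a smaller firm with €100 million turnover, 7% is only €7m,
# so the €35 million figure is the higher of the two.
print(max_fine_prohibited_practice(100_000_000))  # 35000000
```

In practice regulators set the actual fine below this ceiling based on the severity of the infringement, as the tiered amounts above indicate.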
The EU AI Act does not apply to certain AI systems and models, including:
The EU AI Act is a landmark piece of regulation and seeks to become the global gold standard for AI regulation with its sector-agnostic approach, which will help to ensure consistent standards across the board. The rules impose obligations that are proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU, while those associated with little or no risk can be used freely. Those that pose a high risk will be constrained accordingly, without foreclosing opportunities for innovation and development.
The Act will have far-reaching implications, affecting entities that interact with the EU market, even if they are based outside of the EU. There are considerable obligations to comply with, particularly for high-risk AI systems, and navigating the text is no small feat. Getting prepared early is the best way to ensure compliance and that obligations are met. To see how Holistic AI can help you ensure compliance, schedule a demo today.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.