What is the EU AI Act?

Authored by
Airlie Hilliard
Senior Researcher at Holistic AI
Ayesha Gulley
Policy Product Manager at Holistic AI
Published on
February 20, 2025
Last updated on
February 23, 2025

Key takeaways:

  • The EU AI Act governs AI available on the EU market through its risk-based approach
  • The most stringent obligations are for high-risk systems, while some systems with unacceptable risk are prohibited from use
  • There are separate obligations for general-purpose AI models, with additional obligations for models with systemic risk
  • There are hefty penalties for non-compliance – up to €35 million or 7% of global turnover

What is the EU AI Act?

The EU AI Act is a comprehensive legal framework governing AI available on the EU market. As product safety legislation, its purpose is to protect the fundamental rights, health, and safety of EU citizens through a risk-based approach: AI practices that conflict with European values and pose too high a threat to fundamental rights, health, and safety are prohibited, while systems posing a high risk must comply with stringent obligations.

What is the EU AI Act’s risk-based approach?

Under the EU AI Act’s risk-based approach, obligations for AI systems are proportionate to the level of risk they present, taking into account factors such as the design and intended use. Based on risk level, the EU AI Act specifies corresponding requirements for documentation, auditing, and transparency. There are three distinct levels of risk:

  • Systems with unacceptable risk: AI systems prohibited from being sold on the EU market under Article 5 of the Act due to their unacceptably high risk to fundamental rights, health, and safety.
  • High-risk systems: AI systems that are considered to pose a risk to health, safety, and fundamental rights but are nevertheless allowed on the market, provided that they meet certain requirements.
  • Low-risk systems: Systems that are neither prohibited nor high-risk, such as spam filters or AI-enabled video games; these comprise the majority of AI systems currently on the market. They have no obligations under the rules in their current form but must comply with existing legislation and may be subject to voluntary codes of conduct.

In addition to these three distinct risk levels, some systems may also carry limited risk, also referred to as transparency risk, provided the system is not prohibited. Systems with transparency risk are those that interact with end-users: users of these systems must be informed that they are interacting with an AI system, that an AI system will be used to infer their characteristics or emotions, or that the content they are interacting with has been generated using AI. Examples include chatbots and deepfakes.

The Act also provides a separate risk-based classification for general-purpose AI (GPAI) models and associated obligations.
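
To make the decision structure concrete, the sketch below encodes the classification logic described above in Python. It is purely illustrative: the class fields and helper names are hypothetical assumptions, and real classification under the Act requires case-by-case legal analysis.

```python
from dataclasses import dataclass

# Hypothetical, simplified view of an AI system's risk-relevant properties.
@dataclass
class AISystem:
    uses_prohibited_practice: bool  # engages in an Article 5 practice, e.g. social scoring
    annex_iii_use_case: bool        # falls under one of the eight Annex III use cases
    significant_risk_of_harm: bool  # poses a significant risk to health, safety, or rights
    interacts_with_end_users: bool  # e.g. a chatbot or deepfake generator

def classify(system: AISystem) -> str:
    # Prohibited practices trump everything else
    if system.uses_prohibited_practice:
        return "unacceptable risk: prohibited (Article 5)"
    # Annex III use cases are high-risk unless they pose no significant risk of harm
    if system.annex_iii_use_case and system.significant_risk_of_harm:
        label = "high risk: stringent obligations (Articles 9-15)"
    else:
        label = "low risk: voluntary codes of conduct"
    # Transparency obligations stack on top of the risk level
    if system.interacts_with_end_users:
        label += " + transparency obligations"
    return label

print(classify(AISystem(False, True, True, False)))
# high risk: stringent obligations (Articles 9-15)
```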

What are the high-risk systems under the AI Act?

Annex III of the EU AI Act defines eight high-risk use cases:

  • Biometric systems used for biometric identification or to make inferences about personal characteristics, including emotion recognition systems
  • Systems for critical infrastructure and protection of the environment, including those used to manage pollution
  • Education and vocational training systems used to evaluate or influence the learning process of individuals
  • Systems influencing employment, talent management and access to self-employment
  • Systems affecting access to and use of private and public services and benefits, including those used for risk assessment and pricing in life and health insurance
  • Systems used in law enforcement, including systems used on behalf of law enforcement
  • Systems to manage migration, asylum and border control, including systems used on behalf of the relevant public authority
  • Systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority

When an AI system falls within one of these use cases, it is automatically considered high-risk. However, if the system does not pose a significant risk of harm to health, safety, or fundamental rights, then it will not be considered high-risk. This includes scenarios where an AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity.

In addition, an AI system is considered high risk if it is intended to be used as a safety component of a product or is a product covered by the list of Union harmonization legislation in Annex I and is required to undergo a third-party conformity assessment related to health and safety risks.

What are the obligations for high-risk systems?

Operators of high-risk systems face different obligations depending on their role. The most stringent obligations fall on providers of high-risk AI systems. Among these obligations, the most crucial is to ensure that a high-risk AI system meets the following technical requirements:

  • A continuous and iterative risk management system must be established throughout the entire lifecycle of the system (Article 9)
  • Data governance practices should be established to ensure the data for the training, validation, and testing of systems are appropriate for the system’s intended purpose (Article 10)
  • Technical documentation should be drawn up before the system is put onto the market (Article 11)
  • Record-keeping should be facilitated by ensuring the system is capable of automatic recording of events (Article 12)
  • Systems should be developed in a way that allows appropriate transparency and the provision of information to users (Article 13)
  • Systems should be designed to allow appropriate human oversight to prevent or minimise risks to health, safety, or fundamental rights (Article 14)
  • There should be an appropriate level of accuracy, robustness and cybersecurity maintained throughout the lifecycle of the system (Article 15)

Providers must also comply with operational obligations, including conducting a conformity assessment, drawing up a declaration of conformity, and establishing a post-market monitoring system. The system must then be registered in the EU database and must bear the CE marking to indicate its conformity before it can be placed on the market.

On the other hand, obligations for deployers include conducting a fundamental rights impact assessment to determine the potential effects of deploying a high-risk AI system.

What are the obligations for general-purpose AI models?

Obligations for providers of general-purpose AI models include drawing up technical documentation and putting in place a policy to comply with Union copyright law.

In addition, some models may be considered as having high-impact capabilities resulting in systemic risk if more than 10²⁵ floating point operations (FLOPs) of computing power were used during their training. For these models, there are additional obligations, such as conducting model evaluations.
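
As a rough illustration of that threshold, the sketch below estimates training compute using the widely cited 6 × parameters × training-tokens approximation and compares it to 10²⁵ FLOPs. The heuristic and the model figures are assumptions for illustration only; the Act does not prescribe an estimation method.

```python
# The Act presumes high-impact capabilities (systemic risk) when cumulative
# training compute exceeds 1e25 FLOPs. The 6*N*D estimate below is a common
# community heuristic, NOT a method prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70B parameters trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# ~6.30e+24 FLOPs; systemic risk presumed: False
```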

What practices are prohibited under the EU AI Act?

Article 5 prohibits the following practices that are deemed to pose too high of a risk:

  • The use of AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the intention of materially distorting behaviour by impairing the ability to make an informed decision, causing significant harm
  • AI systems that exploit the vulnerabilities of a person or specific group of people due to their age, disability, or a specific social or economic situation, with the objective of distorting behaviour to cause significant harm
  • Biometric categorization systems that categorise individuals according to sensitive or protected attributes or based on the inference of those attributes
  • Systems used for social scoring, evaluating or classifying individuals based on social behaviour or personal or personality characteristics, where this leads to detrimental treatment of individuals or groups outside of the context in which the data was collected, or treatment that is disproportionate to the social behaviour
  • Systems used to assess or predict the risk of a person committing a criminal offence based solely on profiling or on the assessment of personality traits and characteristics
  • Indiscriminate and untargeted scraping of biometric data from the internet (including social media) or CCTV footage to create or expand facial recognition databases
  • Real-time remote biometric identification systems used in publicly accessible spaces for the purposes of law enforcement, subject to narrow exceptions such as targeted searches for victims of serious crimes

Whether a given AI system is prohibited must be assessed on a case-by-case basis. On 4 February 2025, the European Commission published guidelines on prohibited AI practices that can be used to guide this evaluation.

Who has to comply with the EU AI Act?

The EU AI Act imposes obligations on a number of parties, including importers, distributors, and deployers, although the Act primarily applies to providers of AI systems and GPAI models. The EU AI Act governs the EU market, meaning that entities placing systems or models on the market or putting them into service within the EU must comply, regardless of whether they are physically based in the EU.

What is the EU AI Act’s enforcement timeline?

The Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024. However, its provisions apply gradually:

  • 2 February 2025: prohibitions on unacceptable-risk practices and AI literacy obligations apply
  • 2 August 2025: obligations for general-purpose AI models, governance rules, and penalty provisions apply
  • 2 August 2026: most remaining provisions apply, including obligations for Annex III high-risk systems
  • 2 August 2027: obligations for high-risk systems that are products or safety components covered by Annex I apply

What are the penalties for non-compliance?

Non-compliance comes with steep penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the use of prohibited systems. The severity of fines depends on the level of transgression: violations of most other obligations carry fines of up to €15 million or 3% of turnover, while supplying incorrect, incomplete, or misleading information carries fines of up to €7.5 million or 1% of turnover.
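
A minimal sketch of how these caps work, assuming only the “whichever is higher” rule stated above; the company turnover figure is hypothetical:

```python
# Each penalty tier caps fines at the HIGHER of a fixed amount and a
# percentage of worldwide annual turnover.

def fine_cap(fixed_cap_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Prohibited-practice tier for a hypothetical company with €2 billion turnover:
print(f"€{fine_cap(35e6, 0.07, 2e9):,.0f}")  # €140,000,000 (7% exceeds €35 million)
```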


Are there any exemptions?

The EU AI Act does not apply to certain AI systems and models, including:

  • AI systems used in research, testing, and development before they are placed on the market or put into service, provided they respect fundamental rights and are not tested in real-world conditions
  • Public authorities of third countries and international organizations when working within the framework of international agreements
  • AI systems exclusively developed or used for military purposes
  • AI components provided under free and open-source licenses, with the exception of high-risk AI systems or GPAI models with systemic risk.

Prepare for a global impact

The EU AI Act is a landmark piece of regulation that seeks to set the global gold standard for AI regulation with its sector-agnostic approach, helping to ensure consistent standards across the board. The rules impose obligations that are proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU, while those associated with little or no risk can be used freely. Those that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development.

The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside of the EU. There are considerable obligations to comply with, particularly for high-risk AI systems, and navigating the text is no small feat. Preparing early is the best way to ensure that obligations are met. To see how Holistic AI can help you ensure compliance, schedule a demo today.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
