Key takeaways:
- Systems that pose a significant risk to fundamental rights, health, or safety are considered high-risk and are subject to stringent obligations
- A system may be classified as high-risk if it is listed in Annex III or is subject to conformity assessments under the Union harmonized product legislation listed in Annex I
- High-risk systems, depending on their functions, may also be subject to transparency requirements
- General-purpose AI models integrated into high-risk AI systems are subject to both high-risk AI system requirements and general-purpose AI model requirements
The EU AI Act is the world’s first comprehensive legal framework governing AI. Taking a risk-based approach, the EU AI Act prohibits systems that pose unacceptable levels of risk to health, safety, or fundamental rights; imposes stringent requirements on high-risk systems; and imposes transparency requirements on providers and deployers of certain AI systems. In this blog post, we outline the steps you need to take to determine whether your AI systems are high-risk and what you need to do if they are.
How are systems classified as high-risk under the EU AI Act?
Under the AI Act, there are two ways that a system can be classified as high-risk:
- It falls under specific Union harmonization legislation and is subject to a third-party conformity assessment under that sectoral legislation.
- It is one of the applications listed in Annex III.
Systems classified as high-risk must comply with stringent design criteria and operator obligations.
Systems covered by harmonization legislation
An AI system is high-risk if it:
- Is a safety component of a product, or is itself a product, covered by certain Union harmonization legislation, and
- Is required to undergo a third-party conformity assessment under that harmonization legislation.
The harmonization legislation in scope is listed in Annex I and includes legislation related to products such as radio equipment, in vitro diagnostic medical devices, civil aviation security, and the rail system.
Use cases specified in Annex III
Annex III lists eight key use cases for AI systems that are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons:
- Biometric and emotion recognition systems, with certain exceptions for verification.
- AI managing critical infrastructure such as traffic and utilities.
- AI in education for admissions, grading, and monitoring student behavior.
- AI in employment for hiring, performance reviews, and task assignment.
- AI for essential public services like social welfare, credit scoring, and emergency response.
- Law enforcement tools for profiling and evaluating evidence in criminal investigations.
- AI used in migration and border control for risk assessment and document verification.
- AI supporting judicial processes, including evidence evaluation, legal interpretation, and dispute resolution.
However, these systems might not be considered high-risk if they do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This can be the case where the system is intended to:
- Perform a narrow procedural task
- Improve the result of a previously completed human activity
- Detect decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing a previously completed human assessment without proper human review
- Perform a preparatory task to an assessment relevant for one of the eight use cases
If any of the above apply, providers can document an assessment to that effect before the system is placed on the market or deployed. National competent authorities may request this documentation.
The exception is profiling: systems listed in Annex III that perform profiling of natural persons, as defined under the GDPR, are automatically considered high-risk regardless of the conditions above.
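To make the classification logic concrete, here is a minimal sketch of the Annex III route in Python. The class, field names, and function are hypothetical simplifications for illustration; they are not drawn from the Act and are no substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_iii_use_case: bool        # matches one of the eight Annex III areas
    profiles_natural_persons: bool  # profiling as defined under the GDPR
    narrow_procedural_task: bool
    improves_prior_human_activity: bool
    detects_patterns_only: bool     # no unreviewed influence on human assessments
    preparatory_task_only: bool

def annex_iii_high_risk(system: AISystem) -> bool:
    """Simplified triage for the Annex III route only."""
    if not system.annex_iii_use_case:
        return False
    # Profiling of natural persons is always high-risk: no exceptions apply.
    if system.profiles_natural_persons:
        return True
    # Exceptions: if one applies, document the assessment before the system
    # is placed on the market or deployed.
    exception_applies = (
        system.narrow_procedural_task
        or system.improves_prior_human_activity
        or system.detects_patterns_only
        or system.preparatory_task_only
    )
    return not exception_applies
```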
What are the obligations for high-risk systems under the AI Act?
There are seven key design-related requirements that providers of high-risk AI systems must meet under the EU AI Act:
- Establishment of a risk management system
- Maintaining appropriate data governance and management practices
- Drawing up technical documentation
- Record-keeping
- Ensuring transparency and the provision of information to deployers
- Maintaining an appropriate level of human oversight
- Ensuring an appropriate level of accuracy, robustness, and cybersecurity
Compliance with these requirements is just one of the obligations of providers of high-risk AI systems. Other obligations include CE marking, system registration, and ensuring that the high-risk AI system complies with accessibility requirements. There are also specific obligations for other actors such as deployers, importers, distributors, and authorised representatives.
Transparency obligations for certain AI systems
Some high-risk AI systems may also carry transparency requirements for providers and deployers, in addition to the above obligations, as follows:
- AI systems designed for direct interaction with users must declare that they are AI-driven unless their artificiality is self-evident, or disclosure is waived for purposes like crime prevention or investigation.
- Creators of synthetic content (audio, image, video, or text) are required to label their outputs as artificially generated or edited in a machine-readable format (one illustrative approach is sketched after this list), with exceptions for purposes such as editorial assistance or authorized law enforcement activities.
- Deployers of emotion recognition or biometric categorization AI must notify the individuals exposed to the system, unless disclosure is overridden by law enforcement permissions.
- Creators of content that could be confused with real human outputs (such as deepfakes) are obliged to declare that content is AI-generated, with artistic, satirical, or law enforcement exceptions acknowledged.
- AI-generated text that is made public should be identified as such, unless it has undergone human review or falls under specific legal exemptions.
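The Act does not prescribe a specific machine-readable labeling format; production systems typically rely on provenance metadata standards or watermarking. As a purely illustrative example, the following sketch embeds a simple label in a PNG's metadata using Pillow; the key names and file names are hypothetical.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach a minimal machine-readable "AI-generated" label to an image.
image = Image.open("generated.png")          # hypothetical generated image
metadata = PngInfo()
metadata.add_text("ai_generated", "true")    # illustrative key, not mandated
metadata.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("generated_labeled.png", pnginfo=metadata)
```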
General-purpose AI models in high-risk systems
When providers combine general-purpose AI (GPAI) models with other elements, such as user interfaces or feedback mechanisms, they form an AI system. If the resulting system is considered high-risk, providers must comply with the obligations for both GPAI models and high-risk systems.
Obligations for general-purpose AI models
General-purpose AI (GPAI) models are dealt with separately from AI systems under the EU AI Act and have their own dedicated obligations, which are primarily targeted towards providers:
- Drawing up technical documentation and keeping it up to date.
- Drawing up and making available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems.
- Putting in place a policy to respect EU copyright legislation.
- Drawing up and making publicly available a detailed summary of the content used for training the GPAI model.
- Cooperating with the Commission and the national authorities as necessary.
Obligations for GPAI models with systemic risk
In addition to these obligations, GPAI models trained using a cumulative amount of computation greater than 10²⁵ floating point operations (FLOPs) are presumed to have high-impact capabilities and are therefore considered to pose systemic risk.
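For a rough sense of scale, the sketch below estimates training compute with the common approximation of about 6 FLOPs per parameter per training token for dense transformers; the approximation and the example model figures are illustrative assumptions, not part of the Act.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set by the EU AI Act

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer (~6 * N * D)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~6.30e+24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```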
In addition to the obligations above, providers of GPAI models with systemic risk must:
- Perform model evaluations in accordance with state-of-the-art tools and practices.
- Assess and mitigate possible systemic risks at Union level.
- Keep track of relevant information about serious incidents, report these to authorities, and identify possible corresponding corrective actions.
- Ensure that an adequate level of cybersecurity protection is in place for the model and its physical infrastructure.
Prepare for compliance with high-risk AI system obligations
Obligations for high-risk systems are set to take effect from 2 August 2026, with hefty penalties for non-compliance. To prepare for compliance with these obligations, follow these steps (a simplified sketch of the triage follows the list):
- Create an inventory of your AI systems and, for each system:
  - Confirm the system is not prohibited under the EU AI Act
  - Confirm whether the system is listed in Annex III or subject to conformity assessments under Union harmonization legislation
  - If it is, check whether any of the exceptions to the high-risk classification apply
  - Check whether the system is built on a general-purpose AI model and, if so, whether that model poses systemic risk
  - Confirm your role (provider, deployer, importer, distributor, or authorized representative) to determine your specific obligations
- Start your compliance journey with the relevant obligations, including those for GPAI models if applicable
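As a companion to the checklist, here is a hedged end-to-end sketch of the triage for a single system. The enum values and parameter names are hypothetical, and a real assessment would need far more nuance than four booleans.

```python
from enum import Enum, auto

class Triage(Enum):
    PROHIBITED = auto()
    HIGH_RISK = auto()
    HIGH_RISK_PLUS_GPAI = auto()  # high-risk system built on a GPAI model
    NOT_HIGH_RISK = auto()

def triage(prohibited: bool, listed_or_conformity: bool,
           exception_applies: bool, built_on_gpai: bool) -> Triage:
    """Simplified per-system triage mirroring the checklist above."""
    if prohibited:
        return Triage.PROHIBITED
    if listed_or_conformity and not exception_applies:
        return Triage.HIGH_RISK_PLUS_GPAI if built_on_gpai else Triage.HIGH_RISK
    return Triage.NOT_HIGH_RISK

print(triage(prohibited=False, listed_or_conformity=True,
             exception_applies=False, built_on_gpai=True))
# Triage.HIGH_RISK_PLUS_GPAI: both sets of obligations apply
```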
Ensure your AI systems comply with the EU AI Act and mitigate regulatory risks with confidence. Schedule a demo with Holistic AI today to streamline governance and safeguard your AI deployments!