
Identify High-Risk AI Systems Under the EU AI Act [2025 Guide]

Authored by
Osman Gazi Güçlütürk
Legal & Regulatory Lead in Public Policy at Holistic AI
Published on
Dec 1, 2024

The EU AI Act is the world’s first comprehensive legal framework governing AI across use cases. Following a lengthy consultation process since it was first proposed in April 2021 – which saw member states and Union institutions propose comprehensive amendments – a political agreement was reached in December 2023. The text based on this agreement is now going through the final stages of the EU law-making procedure: it was approved by the EU Parliament committees on 13 February 2024 and will be voted on by the Parliament plenary in March. The text preserves the risk-based approach for AI systems, where requirements are proportionate to the level of risk posed by a system, while introducing another risk-based classification for general purpose AI (“GPAI”) models.

This guide serves as a starting point for organizations seeking to determine the level of regulatory risk their systems pose in the EU.

Who will the EU AI Act affect?

The objective of the Act is to protect fundamental rights and prevent harm by regulating AI use within the European Union. This includes not only EU-based entities but also any organization that employs AI in interactions with EU residents, due to the Act’s extraterritorial reach.

The Act categorizes AI systems and delineates different responsibilities for different parties based on the system's risk level. It's crucial for all participants in the AI system's lifecycle to understand their role and the risk classification of their system. Those with obligations under the Act are providers, deployers, importers, and distributors of AI systems, as well as the authorized representatives of providers located outside the EU. Providers—those who develop, train, or market AI systems—are subject to the most comprehensive obligations.

How does the EU AI Act classify AI systems?

Identification of the role of an entity (operator) and the classification of the AI system in question are crucial steps to prepare for the Act. Naturally, operators must first create a comprehensive inventory of their AI assets. This allows them to determine whether they possess an AI system or an AI model, leading to a bifurcated assessment process: one for AI systems and another for GPAI models.

For AI models, the initial step is to determine if they qualify as GPAI models. If this is the case, a further assessment is needed to establish if they possess high-impact capabilities, which would classify them as GPAI models with systemic risk.

For AI systems, two simultaneous evaluations are necessary. The first is to ascertain the system's risk category. The second is to determine if the system's operations invoke additional transparency requirements.

In instances where an AI system incorporates a GPAI model, the system is designated as a GPAI system under the Act. In such cases, the system is subject both to the requirements associated with its risk level and, independently, to the GPAI model assessments.
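To make this bifurcated process concrete, the decision flow can be sketched in code. The following Python sketch is purely illustrative: the class names, fields, and track labels are our own shorthand for the steps described above, not terminology from the Act.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AssetKind(Enum):
    AI_SYSTEM = auto()
    AI_MODEL = auto()

@dataclass
class AIAsset:
    name: str
    kind: AssetKind
    is_general_purpose: bool = False  # relevant for models
    embeds_gpai_model: bool = False   # relevant for systems

def assessment_tracks(asset: AIAsset) -> list[str]:
    """Return the assessment tracks an inventoried asset must go through."""
    tracks = []
    if asset.kind is AssetKind.AI_SYSTEM:
        tracks.append("risk-category assessment (prohibited / high-risk / minimal)")
        tracks.append("transparency-obligation check")
        if asset.embeds_gpai_model:
            # A system built on a GPAI model is a 'GPAI system': the embedded
            # model is independently assessed under the GPAI provisions.
            tracks.append("GPAI model assessment (including systemic-risk check)")
    elif asset.is_general_purpose:
        tracks.append("GPAI model assessment (including systemic-risk check)")
    return tracks
```

Each track corresponds to one of the assessments discussed in the sections that follow.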

The First Prong of the AI System Classification: What are the risk categories for AI systems?

AI systems under the Act are assigned to one of three distinct risk categories:

  1. Systems that are deemed to pose an unacceptable risk are prohibited outright.
  2. Systems identified as high-risk are subject to rigorous design and operational requirements.
  3. Systems that are categorized as minimal risk are not subjected to any mandatory regulatory framework.

Notably, each system is exclusively categorized into only one of these risk categories. For a proper classification, the evaluation should take a top-down approach, starting with the question of whether the system is prohibited. If it is not, this must be followed by an assessment of whether the system is high-risk.
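This top-down, mutually exclusive ordering can be expressed as a simple decision function. A minimal sketch, assuming the prohibited and high-risk determinations (covered in the next two sections) are available as boolean inputs:

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited (unacceptable risk)"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal risk"

def classify_system(is_prohibited: bool, is_high_risk: bool) -> RiskCategory:
    """Top-down, mutually exclusive classification: prohibitions are checked
    first, then high-risk status; everything else is minimal risk."""
    if is_prohibited:
        return RiskCategory.PROHIBITED
    if is_high_risk:
        return RiskCategory.HIGH_RISK
    return RiskCategory.MINIMAL_RISK
```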

How do I know if my AI system is prohibited under the EU AI Act?

When evaluating the risk level of an AI system, the first step is to determine if it falls under any of the prohibited categories outlined in Article 5 of the EU AI Act. This article specifies both absolute prohibitions and certain exceptions. Key prohibitions include:

  • The use of subliminal techniques in AI systems that can significantly undermine a person's ability to make decisions freely and with sufficient information.
  • AI applications that target individuals' vulnerabilities related to age, physical or mental disability, or socioeconomic status, leading to undue influence.
  • Biometric categorization systems that infer sensitive personal data, except in strictly regulated situations.
  • Social scoring by public authorities that can result in negative consequences based on social behavior or perceived personality traits.
  • Real-time biometric identification systems used by law enforcement, except in tightly controlled cases of imminent threat or for identifying individuals suspected of serious crimes.

These prohibitions aim to safeguard personal autonomy, prevent unfair discrimination, and uphold public safety, while also protecting privacy and fundamental human rights.

How do I know if my AI system is high-risk under the EU AI Act?

If an AI system is not prohibited, the subsequent step is to evaluate whether it is a high-risk system as per Article 6 of the Act. The Act outlines three principal scenarios where an AI system may be considered high-risk:

  1. AI systems that involve profiling of individuals within the meaning of the GDPR are automatically deemed high-risk.
  2. AI systems that fall under specific EU harmonization legislation (referenced in Annex II) and are subject to conformity assessments as per their sectoral legislation are categorized as high-risk.
  3. AI systems operating in sectors listed in Annex III are also classified as high-risk and must comply with stringent design criteria and operator obligations. These systems broadly include:
  • Biometric and emotion recognition systems, with certain exceptions for verification.
  • AI managing critical infrastructure such as traffic and utilities.
  • AI in education for admissions, grading, and monitoring student behavior.
  • AI in employment for hiring, performance reviews, and task assignment.
  • AI for essential public services like social welfare, credit scoring, and emergency response.
  • Law enforcement tools for profiling and evaluating evidence in criminal investigations.
  • AI used in migration and border control for risk assessment and document verification.
  • AI supporting judicial processes, including evidence evaluation, legal interpretation, and dispute resolution.

An important update in the latest version of the Act is that systems used in these sectors are automatically classified as high-risk, but the provider has the opportunity to rebut this classification if they can substantiate that the system does not pose a significant risk to people's health, safety, or fundamental rights.
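Putting the three scenarios and the rebuttal provision together, the Article 6 test can be sketched as follows. The flag names are hypothetical; in practice each input is itself the outcome of a detailed legal assessment.

```python
def is_high_risk(
    does_gdpr_profiling: bool,
    annex_ii_conformity_product: bool,
    annex_iii_use_case: bool,
    provider_rebuttal_no_significant_risk: bool = False,
) -> bool:
    """Sketch of the Article 6 high-risk test described above."""
    # 1. Profiling of individuals within the meaning of the GDPR
    if does_gdpr_profiling:
        return True
    # 2. Covered by Annex II harmonization legislation and subject to a
    #    conformity assessment under that sectoral legislation
    if annex_ii_conformity_product:
        return True
    # 3. Annex III use case: high-risk by default, unless the provider can
    #    substantiate that the system poses no significant risk to health,
    #    safety, or fundamental rights
    if annex_iii_use_case:
        return not provider_rebuttal_no_significant_risk
    return False
```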

What are the obligations for high-risk systems under the AI Act?

There are seven key design-related requirements for high-risk AI systems under the EU AI Act:

  • Establishment of a risk management system
  • Maintaining appropriate data governance and management practices
  • Drawing up technical documentation
  • Record-keeping
  • Ensuring transparency and the provision of information to deployers
  • Maintaining an appropriate level of human oversight
  • Ensuring an appropriate level of accuracy, robustness, and cybersecurity

However, these requirements must not be confused with obligations imposed on operators. In fact, ensuring that their AI system meets these requirements is only one of the obligations of providers.

How do I know if my system is low (or minimal) risk?

AI systems that do not fall into the prohibited or high-risk categories are considered to have minimal risks. These systems do not have mandatory requirements but are encouraged to adhere to voluntary codes of conduct.

The Second Prong of the AI System Classification: What are the so-called limited risk AI systems and how do I know if my system is one of these?

Within the framework of the EU AI Act, there exists a classification for AI systems often termed "limited risk AI systems". This label does not correspond to one of the exclusive risk categories above; rather, it reflects the specific risks these systems pose during user interactions. Article 52 of the Act sets forth a series of transparency obligations for providers or users of certain AI systems to mitigate these risks:

  • AI systems designed for direct interaction with users must reveal their AI-driven nature unless their artificiality is self-evident, or disclosure is waived for purposes like crime prevention or investigation.
  • Synthetic content creators (audio, image, video, text) are required to label their outputs as artificially generated or edited in a machine-readable format, with exceptions for purposes such as editorial assistance or authorized law enforcement activities (an illustrative labelling sketch follows this list).
  • Deployers of emotion recognition or biometric categorization AI must notify individuals within the exposure range, in addition to adhering to personal data protection laws, unless law enforcement permissions override this requirement.
  • Creators of content that could be confused with real human outputs (such as deepfakes) are obliged to declare the AI involvement, with artistic, satirical, or law enforcement exceptions acknowledged.
  • AI-generated text that is made public should be identified as AI-generated, unless it has undergone human review or falls under specific legal exemptions.
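The Act requires machine-readable labelling but does not prescribe a single technical format. As one illustrative approach, the sketch below embeds a marker in a PNG image's metadata using the Pillow library; the key and value names are our own invention, not a standard mandated by the Act.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable 'AI-generated' marker in a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical key
    metadata.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(dst_path, pnginfo=metadata)
```

In practice, providers may use dedicated provenance standards or watermarking techniques instead; the point is simply that the marker must be readable by machines, not only by humans.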

These directives are applicable across the spectrum of AI systems, whether they are considered high-risk or low risk. Hence, compliance with these obligations is a separate process and needs to be evaluated alongside the risk category determination.

Where do the general-purpose AI model provisions fit in this frame?

General-purpose AI (GPAI) models, previously referred to as foundation models in the Parliament's negotiations, have been given a dedicated chapter in the most recent version of the Act.

The provisions for GPAI models come into play specifically when such a model is part of the AI system under consideration. Providers must first evaluate whether their GPAI model carries systemic risk, characterized by high-impact capabilities. If identified as such, the model is subject to additional and more rigorous technical requirements.

What are the obligations of GPAI models under the AI Act?

The Act imposes obligations primarily on the providers of GPAI models. These obligations are as follows:

  • Drawing up technical documentation and keeping it up to date.
  • Drawing up and making available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems.
  • Putting in place a policy to comply with EU copyright legislation.
  • Drawing up and making publicly available a detailed summary of the content used for training the GPAI model.
  • Cooperating with the Commission and the national authorities as necessary.

What are GPAI models with systemic risk?

In the framework of the Act, a specific and more stringent regulatory regime is applied to certain GPAI models. This is due to their expansive capabilities and the consequential potential impact they may have. To ascertain whether a GPAI model constitutes a GPAI model with systemic risk, an evaluation of methodologies, technical indicators, and benchmarks is conducted.

A key aspect of this determination process is a presumption established by the Act: GPAI models trained using a cumulative amount of computational power exceeding 10²⁵ FLOPs (floating-point operations) are presumed to have high-impact capabilities. This benchmark serves as a heuristic for identifying models with significant potential effects, thereby subjecting them to the Act's stricter regime.
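For intuition on the scale of this threshold, training compute is often approximated in the scaling-law literature as roughly 6 × parameters × training tokens. The back-of-envelope below uses that approximation with hypothetical model figures; it is a rough heuristic, not a method prescribed by the Act.

```python
def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D
    approximation from the scaling-law literature (not from the Act)."""
    return 6.0 * parameters * training_tokens

SYSTEMIC_RISK_PRESUMPTION = 1e25  # FLOPs threshold in the Act

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens
flops = estimated_training_flops(100e9, 10e12)
print(f"{flops:.1e} FLOPs -> presumed systemic risk: {flops > SYSTEMIC_RISK_PRESUMPTION}")
# 6.0e+24 FLOPs -> presumed systemic risk: False
```

Under this approximation, such a model lands at about 6 × 10²⁴ FLOPs, just below the presumption threshold.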

What are the obligations linked to GPAI models with systemic risk?

In addition to the obligations mentioned above, providers of GPAI models with systemic risk must do the following:

  • Perform model evaluations using state-of-the-art tools and practices.
  • Assess and mitigate possible systemic risks at Union level.
  • Keep track of relevant information about serious incidents, report these to authorities, and identify possible corresponding corrective actions.
  • Ensure that an adequate level of cybersecurity protection is in place for the model and its physical infrastructure.

What will happen if I do not comply with the EU AI Act?

The Act provides a set of hefty penalties for non-compliance with its provisions. The amount of the fine varies depending on the role of the infringer and the seriousness of the infringement.
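For illustration, the fines follow a "greater of a fixed amount or a percentage of worldwide annual turnover" pattern. The sketch below uses the tiers announced with the political agreement; the exact figures should be verified against the version of the text in force.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float, tier: str) -> float:
    """Upper bound of a fine: the greater of a fixed amount or a percentage
    of worldwide annual turnover. Figures are from the compromise text and
    are shown for illustration only; verify against the text in force."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # Art. 5 violations
        "other_obligations": (15_000_000, 0.03),      # most other violations
        "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
    }
    fixed_amount, turnover_share = tiers[tier]
    return max(fixed_amount, turnover_share * worldwide_annual_turnover_eur)

# e.g. a company with EUR 2 billion turnover breaching a prohibition:
# max(35_000_000, 0.07 * 2_000_000_000) = EUR 140,000,000
```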

Get started with your AI Act preparedness!

The best way to ensure that your systems are ready for the Act, and to avoid penalties, is to take steps early. No matter the stage of development, classifying your AI systems is of paramount importance. A risk management framework can then be developed and implemented to prevent potential future harm. Getting ahead of this regulation will help you embrace your AI with confidence.

Schedule a call to find out more about how Holistic AI can help you on your journey to AI Act preparedness.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
