Key takeaways:
The EU AI Act is the world’s first comprehensive legal framework governing AI. Taking a risk-based approach, the EU AI Act prohibits systems that pose an unacceptable risk to health, safety, or fundamental rights, imposes stringent requirements on high-risk systems, and sets transparency obligations for providers and deployers of certain AI systems. In this blog post, we outline the steps you need to take to determine whether your AI systems are high-risk and what you need to do if they are.
Under the AI Act, there are two ways that a system can be classified as high-risk:
An AI system is high-risk if it:
The harmonisation legislation in scope is listed in Annex I and includes legislation related to products such as radio equipment, in vitro diagnostic medical devices, civil aviation security, and the rail system.
Annex III lists eight key use cases for AI systems that are considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons:
However, these systems might not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This can be the case where the system is intended to:
If any of the above are true, providers must document an assessment to that effect before the system is placed on the market or deployed. National competent authorities may request this documentation.
This exemption does not apply to systems that perform the profiling of natural persons as defined under the GDPR; systems listed in Annex III that are used for profiling are automatically considered high-risk.
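The classification logic above can be summarised as a simple decision flow. The sketch below is purely illustrative; the field names and the helper function are our own shorthand, not terminology from the AI Act, and the real assessment involves legal judgment that no boolean flag can capture.

```python
from dataclasses import dataclass

# Illustrative model of the two routes to a high-risk classification
# described above. All field names are assumptions for illustration only.

@dataclass
class AISystem:
    safety_component_annex_i: bool  # safety component of / product covered by Annex I legislation
    third_party_assessment: bool    # that legislation requires third-party conformity assessment
    annex_iii_use_case: bool        # falls under one of the eight Annex III use cases
    performs_profiling: bool        # profiles natural persons in the GDPR sense
    significant_risk: bool          # poses significant risk to health, safety, or fundamental rights

def is_high_risk(system: AISystem) -> bool:
    # Route 1: products and safety components under Annex I harmonisation legislation
    if system.safety_component_annex_i and system.third_party_assessment:
        return True
    # Route 2: Annex III use cases
    if system.annex_iii_use_case:
        # Profiling systems are always high-risk, regardless of the risk assessment
        if system.performs_profiling:
            return True
        # Otherwise, providers may document that no significant risk is posed
        return system.significant_risk
    return False
```

For example, an Annex III system that does not profile individuals and whose documented assessment finds no significant risk of harm would fall outside the high-risk category under this logic.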
There are seven key design-related requirements that providers of high-risk AI systems must meet under the EU AI Act:
Compliance with these requirements is just one of the obligations of providers of high-risk AI systems. Other obligations include CE marking, system registration, and ensuring that the high-risk AI system complies with accessibility requirements. There are also specific obligations for other actors such as deployers, importers, distributors, and authorised representatives.
Some high-risk AI systems may also carry transparency requirements for providers and deployers, in addition to the above obligations, as follows:
When providers combine GPAI models with other elements, such as user interfaces or feedback mechanisms, the result is an AI system. If that system is classified as high-risk, the provider must comply with the obligations for both GPAI models and high-risk systems.
General-purpose AI (GPAI) models are dealt with separately from AI systems under the EU AI Act and have their own dedicated obligations, which are primarily targeted towards providers:
Drawing up technical documentation and keeping it up to date.
In addition to these obligations, GPAI models trained using a cumulative amount of computation greater than 10²⁵ floating-point operations (FLOPs) are presumed to have high-impact capabilities and are therefore considered to pose systemic risk.
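The systemic-risk presumption is a straightforward numeric threshold on training compute. A minimal sketch of the check, assuming the provider can estimate cumulative training FLOPs (the function and constant names are our own):

```python
# Compute threshold above which a GPAI model is presumed to have
# high-impact capabilities under the EU AI Act.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # 10^25 floating-point operations

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

For instance, a model trained with 2 × 10²⁵ FLOPs would meet the presumption, while one trained with 5 × 10²⁴ FLOPs would not.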
In addition to the obligations above, providers of GPAI models with systemic risk must:
Obligations for high-risk systems are set to take effect from 2 August 2026 with hefty penalties for non-compliance. To prepare for compliance with these obligations, follow these steps:
Ensure your AI systems comply with the EU AI Act and mitigate regulatory risks with confidence. Schedule a demo with Holistic AI today to streamline governance and safeguard your AI deployments!