On 14 June 2023, the European Parliament voted to adopt its negotiating position on the EU AI Act, which had been approved by the Parliament's leading committees on 11 May 2023. First proposed by the European Commission on 21 April 2021, the Act, formally known as the Harmonised Rules on Artificial Intelligence, seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI. The first sweeping legislation of its kind, the Act will have implications for countless AI systems used in the EU, and the countdown for compliance has begun.
In this blog post, we give a high-level overview of what businesses need to know about who the AI Act will affect, how it will regulate AI in the EU, obligations for high-risk systems, and how you can start to prepare.
The EU AI Act is set to have implications for providers of AI systems used in the EU, whether they are located in the EU or a third country. The legislation also applies to deployers of AI that are established or located in the EU and distributors that make AI systems available on the EU market. There are also implications for entities that import AI systems from outside the EU, as well as product manufacturers and authorised representatives of providers and operators of AI systems. Therefore, the Act will have a global reach, affecting many parties around the world involved in the design, development, deployment, and use of AI systems within the EU.
In the interests of balancing innovation and safety, AI systems used in research, testing, and development will be exempt from the legislation, provided that they are not tested in real-world conditions and that they respect fundamental rights and other legal obligations.
Also excluded are systems developed or used exclusively for military purposes, public authorities of third countries and international organisations using AI within the framework of international agreements, and AI components provided under free and open-source licences, unless they are foundation models.
Under the EU AI Act, artificial intelligence is defined as:
“A machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”
This definition was revised in the May text to align it more closely with the OECD's definition of AI, creating greater standardisation of what counts as AI and, therefore, which systems fall within the scope of AI regulation.
EU citizens are at the heart of the regulation, which seeks to introduce safeguards to minimise preventable harm. However, the Act also strives to ensure that these obligations do not stifle innovation. Accordingly, the EU AI Act takes a risk-based approach to regulating AI, where obligations are proportional to the risk posed by a system based on four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
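As a purely illustrative sketch (the category names and one-line summaries below are our own shorthand, not text from the Act), the risk-based approach can be read as a simple mapping from risk category to regulatory consequence:

```python
from enum import Enum

class RiskCategory(Enum):
    """Our shorthand for the Act's four risk categories: illustrative only."""
    UNACCEPTABLE = "prohibited from the EU market"
    HIGH = "permitted, subject to the obligations discussed below"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "permitted freely"

# Obligations scale with the category, not with the underlying technology.
for category in RiskCategory:
    print(f"{category.name}: {category.value}")
```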
Under Article 6, a system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex II, and it is required to undergo a third-party conformity assessment related to health and safety risks.
This includes products covered by laws relating to the safety of toys, lifts, pressure equipment, and in vitro diagnostic medical devices, per Annex II.
Annex III also lists eight use cases that are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. To aid this evaluation, six months before the implementation of the law, the European Commission will provide guidance on the circumstances where outputs from these systems would pose such a significant risk, following a consultation with the AI Office and relevant stakeholders. The eight use cases, with a simplified sketch of the overall classification logic after the list, are:
1. Biometric and biometrics-based systems: systems used for the biometric identification of individuals and systems used to infer the personal characteristics of individuals based on their biometric or biometrics-based data. This includes emotion recognition systems but does not apply to verification systems used solely to confirm the identity of a specific person.
2. Management and operation of critical infrastructure: this category contains an additional criterion for a system to be considered high-risk, namely whether it poses a significant risk of harm to the environment. It includes systems used for the management and operation of road, rail, and air traffic (unless regulated by harmonised or sector-specific legislation), as well as systems intended to be used as safety components in the supply of water, gas, heating, and electricity, and in critical digital infrastructure.
3. Education and vocational training: systems used to determine access to, or to influence decisions on admission or assignment to, educational and vocational training institutions. Systems used to assess students in, or for admission to, such institutions are also in scope, as are systems used to determine or influence the appropriate level of education for an individual. Finally, systems used to monitor and detect prohibited student behaviour would be considered high-risk.
4. Employment, worker management, and access to self-employment: systems intended to be used for recruitment or selection are considered high-risk, including systems used to place targeted job ads, to screen or filter applications, and to evaluate candidates in tests or interviews. Beyond hiring, systems used to make decisions about promotion, termination, and task allocation based on behaviour or personal characteristics, along with systems used to monitor and evaluate performance and behaviour, would also be considered high-risk.
5. Access to essential private and public services and benefits: AI systems used by or on behalf of public authorities to evaluate eligibility for benefits and services, including healthcare, housing, electricity, heating and cooling, and internet access. This also includes systems used to grant, revoke, increase, or reclaim these benefits and services.
This category also covers systems used for credit scoring (excluding systems used to detect financial fraud) and systems used to make or influence decisions about eligibility for health and life insurance. Further, systems used to evaluate and classify emergency calls, or to dispatch or determine the priority of dispatch of first responders, including police, firefighters, medical aid, and emergency healthcare, are also in scope.
6. Law enforcement: AI systems used by or on behalf of law enforcement authorities, or by EU agencies or bodies, as polygraphs or similar tools; to evaluate the reliability of evidence in the investigation or prosecution of criminal offences; for the profiling of individuals in the course of the detection, investigation, or prosecution of criminal offences; or for crime analytics, searching complex, large data sets to identify unknown patterns or discover hidden relationships in the data.
7. Migration, asylum, and border control management: systems used by or on behalf of public authorities, or by EU agencies, such as polygraphs or similar tools, to assess the security, health, or irregular-immigration risks posed by an individual entering a Member State; to verify the authenticity of travel documents; and to assess applications for asylum, visas, and residence permits, along with associated complaints.
This also includes systems used to monitor, surveil, or process data for border management activities to detect, recognise, or identify individuals, and systems used to forecast or predict trends related to migration movement and border crossing.
8. Administration of justice and democratic processes: AI systems used by or on behalf of a judicial authority to assist in researching and interpreting facts and the law, and in applying the law to a set of facts. This also includes systems intended to be used to influence the voting behaviour of individuals or the outcome of an election or referendum, excluding systems whose outputs individuals are not directly exposed to, such as those used to organise, optimise, and structure political campaigns.
A recent addition here is AI systems intended to be used by social media platforms designated as Very Large Online Platforms under the Digital Services Act (currently, platforms with more than 45 million monthly active users in the EU).
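To make the structure of the Article 6 tests concrete, here is a minimal sketch of the two routes to a high-risk classification described above. This is our own simplification, not an official decision procedure; the class, its field names, and the significant-risk flags are hypothetical stand-ins for what are ultimately legal judgements:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, simplified model of the Article 6 tests described above."""
    safety_component: bool         # safety component of (or itself) an Annex II product
    third_party_assessment: bool   # third-party conformity assessment required
    annex_iii_use_case: bool       # falls under one of the eight Annex III use cases
    risk_to_persons: bool          # significant risk to health, safety, or fundamental rights
    critical_infrastructure: bool  # the one category with an environmental criterion
    risk_to_environment: bool      # significant risk of harm to the environment

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the two routes to a high-risk classification."""
    # Route 1: Annex II products whose safety components must undergo a
    # third-party conformity assessment for health and safety risks.
    annex_ii_route = system.safety_component and system.third_party_assessment

    # Route 2: Annex III use cases that pose a significant risk of harm;
    # for critical infrastructure, environmental harm also counts.
    significant_risk = system.risk_to_persons or (
        system.critical_infrastructure and system.risk_to_environment
    )
    annex_iii_route = system.annex_iii_use_case and significant_risk

    return annex_ii_route or annex_iii_route
```

Whether a risk is ‘significant’ is exactly the question the Commission's forthcoming guidance is intended to answer; the boolean flags above simply mark where that legal judgement slots in.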
Obligations for high-risk systems vary by the type of entity associated with the system, but there are seven broad obligations to comply with: establishing a risk management system; data and data governance; technical documentation; record-keeping; transparency and provision of information to users; human oversight; and accuracy, robustness, and cybersecurity.
Providers of foundation models also have an additional requirement: they must provide a description of the data sources used in development. Additionally, biometric systems used to identify individuals must first have their output verified by at least two people with the necessary competence, training, and authority before it can be acted on.
Compliance with these obligations must be confirmed through a conformity assessment, and systems that pass the assessment must bear the CE marking before they are placed on the market (a digital marking for digital systems and a physical marking for physical systems). These systems must also be registered in a public database. The procedure must be repeated after any significant modification to the system, such as when the model is retrained on new data or features are removed from the model.
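As a hedged illustration of the re-assessment trigger (the function and argument names are ours, and the Act's notion of a significant modification is broader than any fixed checklist), a compliance workflow might flag the two examples above like this:

```python
def needs_new_conformity_assessment(retrained_on_new_data: bool,
                                    features_changed: bool,
                                    other_significant_change: bool = False) -> bool:
    """Flag whether a high-risk system must repeat its conformity assessment.

    The Act requires the procedure to be repeated after any significant
    modification; retraining on new data and removing features are the two
    examples given above. What else counts as 'significant' is a legal
    judgement that no checklist can fully capture.
    """
    return retrained_on_new_data or features_changed or other_significant_change

# Example: a model retrained on fresh data must repeat the procedure.
assert needs_new_conformity_assessment(retrained_on_new_data=True, features_changed=False)
```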
In addition to the potential reputational damage resulting from non-compliance, the Act will impose steep penalties of up to €30 million or 6% of global turnover (whichever is higher). The severity of the fine will depend on the severity of the offence: using prohibited systems sits at the high end, while supplying incorrect, incomplete, or misleading information sits at the low end and can result in fines of up to €10 million or 2% of global turnover. This is similar to the fines set out by the GDPR, which run up to €20 million or 4% of total global turnover for severe violations and €10 million or 2% of global turnover for less serious offences. It is therefore vital that organisations are aware of their obligations to avoid financial and reputational impacts.
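The cap arithmetic itself is straightforward. Here is a minimal sketch covering only the two tiers quoted above (the tier labels are ours; the Act grades offences more finely than this):

```python
def max_fine(offence: str, global_turnover_eur: float) -> float:
    """Upper bound on a fine: the higher of a fixed cap or a share of turnover."""
    tiers = {
        "prohibited_system": (30_000_000, 0.06),      # up to €30m or 6% of turnover
        "incorrect_information": (10_000_000, 0.02),  # up to €10m or 2% of turnover
    }
    fixed_cap, turnover_share = tiers[offence]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with €1bn global turnover: 6% of turnover (€60m) exceeds the
# €30m fixed cap, so the ceiling for a prohibited-system offence is €60m.
print(max_fine("prohibited_system", 1_000_000_000))  # 60000000.0
```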
The EU AI Act seeks to set the global standard for AI regulation, affecting entities around the world that operate in the EU or interact with the EU market, regardless of where they are located. Its sector-agnostic approach will help to ensure consistent standards across the board and that obligations are proportionate to the risk of the system, so that potentially harmful systems are not deployed in the EU while those associated with little or no risk can be used freely. Those that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development. EU citizens are at the heart of the regulation, with an emphasis on safeguarding their fundamental rights and shielding them from preventable harm.
Compliance with the EU AI Act requires a significant commitment from businesses that are developing or deploying systems in the EU. The text of the Act is lengthy and navigating it is no easy feat. With around two and a half years to go until enforcement, it is crucial that businesses use this preparatory period wisely to build up their readiness. Ensuring compliance is a multi-dimensional task that demands the establishment of robust governance structures, building internal competencies, and implementing requisite technologies.
To prepare for compliance, companies must:
Holistic AI’s proprietary Governance Platform is a world-class solution for AI risk management that can be implemented throughout the entire lifecycle of AI systems, minimising their risk from design right through to deployment. Broadly, there are three steps:
Holistic AI is dedicated to helping organisations achieve compliance with the EU AI Act through its comprehensive suite of solutions, having conducted over 1,000 risk mitigations. Leveraging the power of Holistic AI’s Governance Platform can help you with AI governance, AI compliance, integration and workflow enhancement, and transparency.
Schedule a demo to find out more about how Holistic AI can help you prepare to be compliant.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.