First proposed on 21 April 2021, the European Commission's Harmonised Rules on Artificial Intelligence (EU AI Act) seek to lead the world in AI regulation and establish a global standard for protecting users of AI systems from preventable harm. Specifically, the rules aim to create an 'ecosystem of trust' that manages AI risk and prioritises human rights in the development and deployment of AI.
The EU AI Act is a significant piece of legislation that could have a major impact on the development and use of AI in the European Union. Although the Act is still making its way through the legislative process, having only recently been passed by the European Parliament, it is clear that it has the potential to shape the future of AI in Europe.
Since the Act was first proposed, an extensive consultation process has resulted in a number of amendments to the rules in the form of compromise texts, with the Council adopting a draft general approach on 6 December 2022. The text was then debated and revised in the European Parliament, where a political agreement was reached on 27 April 2023 ahead of a key committee vote on 11 May 2023, in which the leading parliamentary committees accepted the text by majority vote. Following this, the European Parliament voted on the amended version of the text on 14 June 2023, accepting it by a large majority and paving the way for Trilogues to commence. The first sweeping legislation of its kind, the Act will have implications for countless AI systems being used in the EU.
While AI and the automation it enables can offer many benefits, such as increased efficiency and accuracy, the use of AI also poses novel technical, legal, and ethical risks. Indeed, scandals have affected multiple high-risk applications of AI:
While existing laws, such as the GDPR, do apply to AI, they alone are not sufficient to prevent AI from causing harm due to the novel risks it can pose.
The Act is forecast to be enforced in 2026, following the likely two-year implementation period that will come after the conclusion of the Trilogue stage, in which three institutions – the European Parliament, the Council of the European Union, and the European Commission – align their respective positions on the AI Act. Trilogues began on 14 June 2023 and are expected to produce a final text by the end of the year, ahead of the 2024 European Parliament elections.
This enforcement date is dependent on several stages of the EU legislative process, but significant progress has already been made. On 11 May 2023, the Civil Liberties and Internal Market committees of the European Parliament overwhelmingly approved the proposed changes to the EU AI Act, with 84 votes in favour, 7 against, and 12 abstentions. Similarly, the plenary vote on 14 June saw an overwhelming majority in favour of the Act, with 499 votes for, 28 against, and 93 abstentions.
Under the Act, providers of AI systems established in the EU must comply with the regulation, as must providers in third countries that place AI systems on the EU market and those located in the EU that use AI systems. It also applies to providers and deployers (formerly referred to as users) based in third countries if the system's output is used within the EU.
AI systems used in research, testing, and development activities before they are placed on the market or put into service are exempt, provided these activities respect fundamental rights and other applicable laws and do not involve testing in real-world conditions. Further, the regulation does not apply to public authorities of third countries or international organisations working within the framework of international agreements, or to AI systems developed or used exclusively for military purposes. In addition, AI components provided under free and open-source licences are excluded, with the exception of foundation models.
The EU AI Act outlines a risk-based approach, where the obligations imposed on an AI system are proportionate to the level of risk it presents, taking into account factors such as its design and intended use. Based on risk level, the EU AI Act specifies corresponding requirements for documentation, auditing, and transparency. The Act establishes four distinct levels of risk, which are defined as follows:
According to Article 6, a system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II and is required to undergo a third-party conformity assessment related to health and safety risks.
Additionally, 8 high-risk use cases are listed in Annex III:
Such systems are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of individuals. While there is currently a lack of clarity on how to determine whether this threshold is met, six months prior to the regulation coming into force, the European Commission will consult with the AI Office and relevant stakeholders to provide clear guidelines specifying the circumstances in which outputs from these systems would pose a significant risk to the health, safety, or fundamental rights of natural persons. In addition, systems used for critical infrastructure are considered high-risk if they pose a significant risk of harm to the environment.
The most recent text also added that AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms with more than 45 million users under the Digital Services Act, could be considered high-risk, falling under the category of systems used in the administration of justice and democratic processes.
A recent addition, the Act now allows providers of high-risk systems to notify the relevant supervisory authorities if they do not deem their system to pose a significant risk, defined by Article 3 as a risk that, as a result of its combined severity, intensity, probability, and duration, could significantly affect an individual or group. Upon receiving the notification, the authority will have three months to review it and object if it considers that the system does pose a significant risk.
High-risk systems are subject to more stringent requirements than any other category. Although obligations can vary by the type of entity associated with the system, in general, there are seven requirements for high-risk systems:
For providers of foundation models specifically, a description of the data sources used in development is also required. Additionally, identifications made by biometric systems cannot be used to inform actions or decisions unless the identification has been verified by at least two people with the necessary competence, training, and authority.
To ensure compliance with the relevant obligations, conformity assessments must be carried out. The system must then be registered in the EU database and bear the CE marking to indicate its conformity before it can be placed on the market. If any substantial modifications are made to the system, including retraining on new data or adding or removing features from the model, it must undergo a new conformity assessment to ensure that the obligations are still being met before it can be re-certified and registered in the database.
Article 5 prohibits the following practices that are deemed to pose too high of a risk:
It is imperative that organisations that may fall under the EU AI Act are aware of their obligations, as non-compliance comes with steep penalties of up to €35 million or 7% of global turnover, whichever is higher. The severity of the fine depends on the level of transgression, ranging from the use of prohibited systems at the high end to the supply of incorrect, incomplete, or misleading information at the low end, which can result in fines of up to €10 million or 2% of turnover.
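To make the 'whichever is higher' cap concrete, the short Python sketch below computes the maximum possible fine for a given global turnover. It is a minimal illustration only, not legal guidance: the tier names and function are hypothetical, and the figures simply mirror those quoted above.

```python
# Illustrative sketch of the EU AI Act penalty cap: the maximum fine is the
# fixed amount or the percentage of global turnover, whichever is higher.
# Tier figures mirror those quoted in this article; not legal advice.

PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # up to €35M or 7% of turnover
    "incorrect_information": (10_000_000, 0.02),  # up to €10M or 2% of turnover
}

def max_fine(global_turnover_eur: float, tier: str) -> float:
    """Return the upper bound of the fine for the given penalty tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with €1 billion in global turnover using a prohibited system:
# max(€35M, 7% of €1B) = €70M, since the turnover-based cap is higher.
print(f"€{max_fine(1_000_000_000, 'prohibited_practice'):,.0f}")  # €70,000,000
```

As the example shows, for large companies it is typically the turnover-based percentage, rather than the fixed amount, that determines the upper bound of the fine.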
The EU AI Act will be a landmark piece of regulation and seeks to become the global gold standard for AI regulation with its sector-agnostic approach, helping to ensure consistent standards for regulating AI across the board. The rules impose obligations that are proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU, while those associated with little or no risk can be used freely. Systems that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development.
The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside the EU. There are considerable obligations to comply with, particularly for high-risk AI systems, and navigating the text is no small feat. Preparing early is the best way to ensure that obligations are met. To find out more about how Holistic AI can help you prepare to be compliant, get in touch at we@holisticai.com.
Here are some frequently asked questions (FAQs) about the EU AI Act, providing clarification on regulations surrounding artificial intelligence in the European Union.
Under the terms of the AI Act, unacceptable-risk systems are those which are exploitative, manipulative, or use subliminal techniques. Systems classified as unacceptable risk are banned in the EU. Examples include real-time biometric identification technologies and other systems that violate fundamental rights.
The costs associated with violating the Act are severe. Failure to comply could result in a fine of up to €35 million or 7% of global turnover, whichever is higher. These figures, taken from the latest version of the Act, are even higher than those in previous iterations.
High-risk systems are those with the potential to significantly impact the life chances of a user – systems used in democratic processes or law enforcement, for example. These systems are subject to the most stringent reporting requirements.
Some argue that overly strict regulation in the domain of AI could stifle innovation, depriving the world of the immense societal benefits that the technology can bring. However, there is now a near-universal consensus among academics, lawmakers, and wider society that regulation is needed in order to mitigate the equally serious risks AI can pose.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts