European co-legislators reached a pivotal provisional agreement (Agreement) on the EU AI Act on 8 December 2023 during the final political trilogue, which lasted around 36 hours.
The EU’s ground-breaking legislative framework seeks to harmonise AI regulations across the EU, balancing innovation with fundamental rights and safety. As the EU positions itself at the forefront of digital regulation, the Agreement represents a crucial step towards formally adopting one of the world's first comprehensive AI legal frameworks. The EU AI Act promises to shape the future of AI in Europe and set a global benchmark for the responsible and ethical design, development, and deployment of AI systems. In this article, we give an overview of the latest updates.
Key Takeaways:
While a political agreement has been reached, the final text has not yet been settled, with technical work to finalise the details expected over the coming weeks. Once completed, the Presidency will present the compromise text to the Member States' representatives (Coreper) for their endorsement. Subsequently, the complete text must be confirmed by both legislative institutions and undergo a legal-linguistic review before it is formally adopted by the co-legislators, concluding the ordinary legislative procedure.
The Agreement clarifies that the EU AI Act shall not apply to areas outside the scope of EU law. It is designed not to interfere with the Member States' competences concerning national security, nor to intrude upon the responsibilities of any entities entrusted with tasks in this area.
Nor shall the EU AI Act apply to systems used exclusively for military or defence purposes. The Agreement also provides that the EU AI Act will not apply to AI systems used solely for research and innovation, or to individuals using AI for non-professional purposes.
The final version of the definition remains unpublished. However, according to the Agreement, the definition of an AI system in the EU AI Act will be aligned with the definition promulgated by the OECD.
The risk-based approach is preserved in the Agreement. Accordingly, some AI systems are prohibited, some are classified as high-risk, and others are subjected to certain transparency obligations depending on their functions.
Nevertheless, there has been some fine-tuning of the list of prohibited systems, which now exceeds the scope initially set out in the Commission's proposal, yet remains more concise than the extensive list in the Parliament's position.
According to the Agreement, the prohibited AI systems, also referred to as AI systems presenting an unacceptable risk, include, but are not limited to, the following:
- Cognitive behavioural manipulation techniques that circumvent a person's free will;
- AI systems that exploit the vulnerabilities of individuals due to their age, disability, or social or economic situation;
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- Emotion recognition in the workplace and educational institutions;
- Biometric categorisation systems that use sensitive characteristics, such as political or religious beliefs or sexual orientation;
- Social scoring based on social behaviour or personal characteristics; and
- Certain applications of predictive policing targeting individuals.
However, it should be noted that the prohibition of predictive policing does not preclude law enforcement agencies from using AI systems for crime and fraud prevention. Law enforcement agencies may use AI systems for crime analytics that do not attribute or correlate data to specific individuals but instead analyse anonymised trends.
Within the trilogue meetings, the issue of Real-Time Biometric Identification (RBI) has proven to be one of the most controversial topics.
The Agreement prohibits the use of RBI in publicly accessible spaces for law enforcement purposes, save for three narrowly defined use cases that require prior judicial authorisation:
- The targeted search for victims of abduction, trafficking, or sexual exploitation;
- The prevention of a specific and present terrorist threat; and
- The localisation or identification of a person suspected of having committed one of the serious crimes listed in the regulation, such as terrorism, trafficking, murder, or kidnapping.
Further safeguards shall also fortify all these exceptions to the prohibition of RBI to ensure due process and proportionality.
The list of high-risk AI systems (HRAIs) and the associated requirements have been updated. However, the horizontal approach to the classification of HRAIs is preserved in the Agreement.
Certain HRAI requirements, such as data quality and technical documentation, have been amended to make them more technically feasible and less burdensome for stakeholders to comply with. The Agreement also provides amendments clarifying the allocation of responsibilities among the different actors in the AI system supply chain, primarily providers and users.
For example, certain public entity users of HRAIs will be required to register in a dedicated EU database. The Agreement also clarifies the interplay between obligations under the EU AI Act and other Union legislation, such as the GDPR, and sets forth a mandatory Fundamental Rights Impact Assessment (FRIA), which now applies to the banking and insurance sectors as well.
General purpose AI systems (GPAIs) are AI systems that can be used for many different purposes. These can be either standalone systems or integrated into a high-risk AI system. Foundation models are now regulated as GPAIs, and the Agreement identifies two tiers of GPAIs: models posing low systemic risk and those posing high systemic risk.
Accordingly, models trained using compute above the 10^25 FLOP threshold shall be considered as posing high systemic risk and subjected to more stringent requirements. These include undertaking model evaluations, assessing and mitigating systemic risks, conducting adversarial testing, and reporting to the Commission on serious incidents, energy efficiency, and cybersecurity safeguards. GPAIs used in research and development, as well as those in the pre-training phase, are exempt from these requirements. Similarly, some of these requirements do not apply to GPAIs made public under an open-source licence.
Currently, very few models, such as GPT-4 and conceivably Llama 2 and Gemini, exceed this threshold. Most models developed by European enterprises, however, are still in the research and development phase and, upon product realisation, are likely to fall under the low-risk tier, which is subject to less stringent obligations.
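For a rough sense of what the 10^25 FLOP threshold implies, the sketch below estimates training compute using the common rule of thumb from the scaling-law literature (training FLOPs ≈ 6 × parameters × training tokens). The heuristic and the example figures are our own illustrative assumptions and are not drawn from the Act itself.

```python
# Rough estimate of training compute using a common heuristic from the
# scaling-law literature: FLOPs ~= 6 x parameters x training tokens.
# The heuristic and the example figures below are illustrative assumptions,
# not values taken from the EU AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # the compute threshold named in the Agreement


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer model."""
    return 6 * n_parameters * n_training_tokens


def exceeds_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Check the estimate against the 10^25 FLOP systemic-risk threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


# Example: a hypothetical 70B-parameter model trained on 2 trillion tokens
# lands at roughly 8.4 x 10^23 FLOPs -- well below the threshold.
print(f"{estimated_training_flops(70e9, 2e12):.2e}")  # 8.40e+23
print(exceeds_threshold(70e9, 2e12))                  # False
```

As the sketch suggests, only models trained at a very large scale would cross the line into the high systemic risk tier under this approximation.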
The European co-legislators acknowledge the potential necessity of revising the Act to ensure its future-proof application, incorporating a mechanism for updates to be made by the AI Office.
Generative AI systems may be classified as GPAIs or HRAIs, in which case they will be subject to the respective requirements. However, the interplay between the rules on HRAIs and GPAIs is not yet clear. Additionally, AI-generated content shall be watermarked, and all generative AI systems shall be required to comply with existing Union copyright legislation.
The Agreement also makes updates concerning AI regulatory sandboxes, which are controlled environments for the development, testing, and validation of innovative AI systems. Specifically, the Agreement adds the possibility of testing AI systems in real-world conditions, subject to specific conditions and safeguards.
To ease the administrative burden on smaller companies, the Agreement also introduces a list of actions to support such operators and provides for clear derogations.
The fines for violations of the EU AI Act are set as a percentage of the violating actor's global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the provisions on prohibited AI systems, €15 million or 3% for violations of the Act's other obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the Agreement provides for more proportionate caps on administrative fines for SMEs and start-ups.
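To illustrate the "whichever is higher" mechanics, the sketch below applies the published caps. The tier labels and the example turnover figure are our own assumptions, and the more proportionate SME/start-up caps mentioned above are not modelled.

```python
# Illustrative calculation of the EU AI Act's "whichever is higher" fine rule.
# Tier figures are taken from the Agreement; the tier labels and the example
# turnover are our own assumptions. SME/start-up derogations are not modelled.

FINE_TIERS = {
    "prohibited_systems": (35_000_000, 0.07),     # EUR 35M or 7% of global turnover
    "other_obligations": (15_000_000, 0.03),      # EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # EUR 7.5M or 1.5%
}


def maximum_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_amount, turnover_pct = FINE_TIERS[violation]
    return max(fixed_amount, turnover_pct * global_annual_turnover_eur)


# Example: a company with EUR 2bn global turnover supplying a prohibited system
# faces up to 7% x EUR 2bn = EUR 140M, since that exceeds the EUR 35M fixed cap.
print(f"EUR {maximum_fine('prohibited_systems', 2_000_000_000):,.0f}")  # EUR 140,000,000
```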
The compromise agreement also makes clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the EU AI Act.
The AI Office shall be set up within the Commission immediately, tasked with overseeing the most advanced AI models, contributing to fostering standards and testing practices, and coordinating the uniform enforcement of the rules across the Member States.
A scientific panel of independent experts shall be created to advise the AI Office on GPAI models, including on the designation of high systemic risk models and systems, and to monitor possible material safety risks related thereto.
The AI Board, which would comprise the Member States’ representatives, shall serve as a coordination platform and an advisory body to the Commission and shall enable the participation of the Member States in the implementation of the regulation, including the design of codes of practice for GPAIs and underlying models. Last but not least, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, shall be set up to provide technical expertise to the AI Board.
In principle, according to the Agreement, the EU AI Act shall apply two years after its entry into force, with formal adoption anticipated in early 2024. However, some provisions are subject to a gradual entry into application: the provisions pertaining to prohibitions shall apply after six months, whereas the transparency and governance provisions shall apply after twelve months.
During the grace period, covered entities will be encouraged to engage in voluntary compliance initiatives, notably the AI Pact, as orchestrated by the Commission, to facilitate a smooth transition to the new regulatory environment.
The Agreement on the EU AI Act is a landmark event that will likely pave the way for future AI legislation and set the global gold standard for doing so. With this Agreement reached, the race for compliance has begun but cannot happen overnight. Taking action early is the best way to gain a competitive edge and maximise innovation while minimising legal liability. Schedule a free demo with our experts to find out how Holistic AI can support you with EU AI Act Compliance.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any particular situation.