
EU AI Act: Summary of Updates on Final Compromise Text

Authored by Ashyana-Jasmine Kachra, Policy Associate at Holistic AI
Published on Dec 13, 2022

Key Takeaways

  • The latest and final compromise text of the EU AI Act (adopted on 6 December 2022) marks the EU ministers' official green light to adopt a general approach to the AI Act.
  • The updates to the final text make it clear that the legal framework aims to strike a balance between fundamental rights and the promotion of AI innovation.
  • Defining AI: The final text includes a narrower definition of AI, as member states were concerned that a broader definition would also cover conventional software.
  • General Scope: Three key changes were made to the Act's scope, relating to the inclusion of general purpose AI, the exclusion of national security uses, and the expansion of the term "vulnerability".
  • Governance: Four key changes were made in the area of governance, relating to the AI Board, Commission guidance, regulatory sandboxes, and penalties.
  • Prohibition: One key change was made: the prohibition of social scoring was extended to private actors.
  • Designating High-Risk Systems: Three key changes were made: a system must now have a decisive weight in decision-making to be classified as high risk; upon implementation, the Commission can add and remove (under certain conditions) high-risk use cases in Annex III; and transparency requirements for high-risk systems were increased, including registration obligations for public-body users.
  • Compliance Feasibility for High-Risk Systems: The final text clarifies and adjusts the requirements for high-risk systems.

1. The EU AI Act compromise text at a glance

The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). It seeks to lay out a normative framework so that the risks of AI systems are managed and mitigated, building trust in AI systems across the EU. The Regulation proposes a risk-based classification for AI systems, defining four levels of risk: minimal, limited, high, and unacceptable.

The Proposal for the EU AI Act of April 2021 has undergone an extended consultation process and several rounds of amendments since then, including the Parliament's reports and the compromise texts of the French and Czech presidencies. Prior drafts of the proposal prioritised the fundamental rights of individuals and the risks to those rights; the Czech Presidency, by contrast, has aimed to strike a balance between fundamental rights and the promotion of AI.

2. Prominent updates

The latest and final compromise text of the EU AI Act (adopted on 6 December 2022) marks the EU ministers’ official green light to adopt a general approach to the AI Act. The following sections summarise and briefly explain the updates in the Czech Presidency’s final compromise text:

2.1 Defining AI

Where the original definition of AI in the Act was conceived to be future-proof, the final text narrows the definition, as member states were concerned that a broader one would also cover conventional software. The new definition, “systems developed through machine learning, logic- and knowledge-based approaches,” can be amended by the Commission in later delegated acts. This update indicates a step towards accommodating AI innovation within the framework.

2.2 General scope

⦿ The original scope of the AI Act was intended to cover only objective-based systems, omitting general purpose AI (language models that can perform a variety of tasks).

  • After concern expressed by member states, the Czech Presidency has tasked the Commission with carrying out an impact assessment and consultation to determine which rules need to be adapted for general purpose AI.
  • Defining GPAI systems, classifying them as high risk, and regulating them under certain conditions will be delegated to later acts. This could also include open-source models. These provisions would become applicable one and a half years after the regulation comes into force.

⦿ The final text clarifies that national security, defence, and military purposes are excluded from the scope of the Act.

  • Similarly, it has been clarified that using AI for the sole purpose of research and development falls outside the scope of the Act.
  • This exclusion ensures that, in extenuating circumstances, these agencies can use real-time remote biometric identification.

⦿ The scope of the term “vulnerability” in the Act was also extended to include socio-economic vulnerabilities.

2.3 Governance

Emphasizing the role of human oversight in the governance of the AI Act, provisions relating to the AI Board have been modified to ensure the Board's autonomy and to clarify its role in the governance of the Act and its regulations.

  • The Commission must also adopt guidance on how to comply with the Act's requirements.
  • The updated text mandates that the Commission designate at least one testing facility to provide technical support for enforcement.
  • However, in line with the final text's aim of creating a legal framework that is also innovation-friendly, regulatory sandbox testing can also take place in real-world conditions and, in some cases, without supervision (reflected in Articles 54a and 54b).
  • Administrative fines are also given proportionate caps, specifically for small and medium enterprises and start-ups: according to Article 71(6), the penalty ceilings are lowered substantially to 3%, 2%, or 1%, depending on the type and size of the company.

The promotion of real-world regulatory sandboxes and the introduction of penalty caps implicitly encourage the development of AI without burdening SMEs and start-ups with the fear of regulatory inflexibility. The changes show a deeper, if measured, commitment to fostering innovation.

2.4 Prohibition

The prohibition of social scoring has been extended from public actors to private actors as well. This ensures the stipulation cannot be circumvented by public agencies contracting private actors to carry out social scoring on their behalf.

2.5 Designating high-risk systems

As per Annex III of the Act, high-risk systems are subject to stricter legal obligations. The updated text includes three major changes in the designation of high-risk systems.

  1. To be classified as high risk, the Czech Presidency has added that “the system should have a decisive weight in the decision-making process and not be purely accessory.” The parameters of what constitutes ‘purely accessory’ will be decided by the Commission through an implementing act.
  2. The list of high-risk systems in Annex III has been revised. The following were removed:
  • Deepfake detection by law enforcement
  • Crime analytics
  • Verification of the authenticity of travel documents

And the following were added:

  • Critical digital infrastructure
  • Life insurance
  • Health insurance
  3. Upon implementation, the Commission can both add and remove (under certain conditions) high-risk use cases in the Annex.

The compromise proposal also includes several changes that increase transparency concerning the use of high-risk AI systems.

  • Article 51 has been updated: the obligation for providers of high-risk systems to register on the EU database for high-risk AI systems has been extended to public-body users (such as public authorities or agencies), except for law enforcement.
  • The newly added Article 52(2a) introduces an obligation for users of an emotion recognition system to inform natural persons when they are exposed to such a system.

The decision to remove deepfake detection by law enforcement and crime analytics from the list of high-risk systems, and to make clear that AI used for national security, defence, and military purposes falls outside the scope of the Act, indicates the precarious balance being struck between the protection of human rights, innovation, and national security.

However, other changes, such as the prohibition of social scoring by private actors and the requirement that public-body users of high-risk systems register on the EU database, suggest that public agencies, and their potential to evade specific stipulations, are being kept under watch. More broadly, this speaks to the Commission’s dedication to regulating AI in both the private and public spheres.

2.6 Compliance feasibility for high-risk systems

Recognising the complex value chains in which AI is developed and deployed, the final text clarifies and adjusts the requirements for high-risk systems.

This includes clarifications on the following:

  • The quality of data
  • The technical documentation that small and medium enterprises need in order to demonstrate that their high-risk systems comply
  • Where responsibilities lie, and with whom, at various stages of the AI lifecycle and value chain

2.6 I) Clarification on the interplay of the AI Act with other EU laws

2.6 II) Clarification on conformity assessments

The text also includes clarifications and simplifications of the required conformity assessments. A conformity assessment is a legal obligation designed to foster accountability under the proposed EU AI Act and applies only to AI systems classified as ‘high-risk’. It refers to the process of checking that the requirements set out in Title III, Chapter 2 have been fulfilled, where Title III contains the provisions that apply only to high-risk systems. A third-party conformity assessment is required only for AI systems intended to be used for the remote biometric identification of people, to the extent that such systems are not prohibited.

However, third-party conformity assessments bring their own advantages, and arguably should not be restricted to such a narrow group of AI systems:

  1. Third-party conformity assessments are more trusted, as they preserve independence from manufacturers, distributors, and importers.
  2. Third-party conformity assessments may particularly help smaller-scale businesses (SMEs) achieve a high level of safety and fairness in their AI systems by outsourcing this complex procedure.
  3. Encouraging quality control through third parties may also help the EU develop its AI regulation strategy through cooperation and communication with regulators.

Other EU legal frameworks, such as the Digital Markets Act and the Digital Services Act, mandate third-party audits and assessments to verify compliance with their requirements.

3. What happens next?

As the member states have adopted the general approach, the Council will now enter negotiations with the European Parliament once Parliament adopts its own position. An agreement is expected to be reached, with the EU AI Act set to pass by early 2024.

4. Getting ready for the EU AI Act

Although not yet in force, the EU AI Act and the developments that follow will shape the industry. Taking steps to manage the risks of your AI systems is the best way to get ahead of this upcoming regulation and can help you embrace AI with greater confidence. Reach out to find out more about how Holistic AI’s software platform and team of experts can help you manage the risks of your AI.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
