Key Takeaways
The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). It seeks to lay out a normative framework so that the risks of AI systems are managed and mitigated, building trust in AI systems in the EU. The Regulation proposes a risk-based classification for AI systems, which defines four levels of risk: minimal, limited, high, and unacceptable.
Since its publication in April 2021, the Proposal for the EU AI Act has undergone an extended consultation process and several rounds of amendments, including the Parliament’s reports and the compromise texts of the French and Czech presidencies. For example, while prior drafts of the proposal prioritised the fundamental rights of individuals and the risks to those rights, the Czech Presidency has aimed to strike a balance between fundamental rights and the promotion of AI.
The latest and final compromise text of the EU AI Act (adopted on 6 December 2022) marks the EU ministers’ official green light for a general approach to the AI Act. The updates from the Czech Presidency’s final compromise text are summarised and briefly explained below:
Where the original definition of AI in the Act was conceived to be future-proof, the final text narrows the definition, as member states were concerned that a broader one would capture conventional software too generally. The new definition, “systems developed through machine learning, logic- and knowledge-based approaches,” can be amended by the Commission in later delegated acts. This update indicates a step towards accommodating AI innovation within the framework.
⦿ The original scope of the AI Act was intended to cover only objective-based systems, omitting general purpose AI (such as language models that can perform a variety of tasks).
⦿ The final text clarifies that national security, defence, and military purposes are excluded from the scope of the Act.
⦿ The scope of the term “vulnerability” in the Act was also extended to include socio-economic vulnerabilities.
To emphasise the role of human oversight in the governance of the AI Act, the provisions relating to the AI Board have been modified. These changes ensure the Board’s autonomy and make clear its role in the governance of the Act and its regulations.
The promotion of real-world regulatory sandboxes and the introduction of penalty caps implicitly encourage the development of AI without burdening SMEs and start-ups with the fear of regulatory inflexibility. The changes thus show a deeper, though measured, commitment to fostering innovation.
The prohibition of social scoring was extended from public actors to private actors as well. This ensures the stipulation cannot be circumvented by public agencies contracting out social scoring to private actors on their behalf.
As per Annex III of the Act, high-risk systems are subject to stricter legal obligations. The updated text includes three major changes in the designation of high-risk systems.
The following were added to the list of high-risk systems:
The compromise proposal also includes several changes that increase transparency concerning the use of high-risk AI systems.
The decision to remove deepfake detection by law enforcement and crime analytics from the list of high-risk systems, together with the clarification that AI used for national security, defence, and military purposes falls outside the scope of the Act, indicates the precarious balance being maintained between the protection of human rights, innovation, and national security.
However, other changes, such as the prohibition of social scoring by private actors and the requirement for public bodies using high-risk systems to register in the EU database, indicate that public agencies, and their potential to evade specific stipulations, are being kept under watch. More broadly, this speaks to the Commission’s dedication to regulating AI in both the private and public spheres.
Recognising the complex value chains in which AI is developed and deployed, the final text clarifies and adjusts the requirements for high-risk systems.
This includes clarifications on the following:
The text also includes clarifications and simplifications of the required conformity assessments. A conformity assessment is a legal obligation designed to foster accountability under the proposed EU AI Act, and it applies only to AI systems classified as high-risk: it refers to the process of checking that the requirements set out in Title III, Chapter 2, whose provisions apply only to high-risk systems, have been fulfilled. A third-party conformity assessment is required only for AI systems intended to be used for the remote biometric identification of people, to the extent that such systems are not prohibited.
However, third-party conformity assessments bring their own advantages, and they should not be restricted to such a narrow group of AI systems:
Other EU legal frameworks, such as the Digital Markets Act and the Digital Services Act, mandate third-party audits and assessments to verify compliance with their requirements.
Now that the member states have adopted the general approach, the Council of the European Union will enter negotiations with the European Parliament once Parliament adopts its own position. An agreement is expected to be reached, with the EU AI Act set to pass by early 2024.
Although it is not yet in force, the EU AI Act and the developments that follow will shape the industry. Taking steps to manage the risks of your AI systems is the best way to get ahead of this upcoming regulation and can help you embrace AI with greater confidence. Reach out to find out more about how Holistic AI’s software platform and team of experts can help you manage the risks of your AI.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.