Key takeaways
The EU AI Act is the EU’s proposed law to regulate the development and use of ‘high-risk’ AI systems, including those used in HR, banking and education.
The EU AI Act was first proposed by the European Commission in April 2021. It will be the first law worldwide to regulate the development and use of AI in a comprehensive way.
The AI Act is set to be the “GDPR for AI”, with hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organisations which develop and deploy AI.
Any enterprise operating in or selling into Europe should be aware of the wide-ranging implications of the Act and take steps now to prepare for compliance with its provisions.
Adopting the AI Act remains a top priority for the EU institutions.
The EU plans to adopt the AI Act within the next year. The absolute deadline is February 2024, due to the European Parliament elections and appointment of the new European Commission in May 2024.
The next few months will be dominated by the European Parliament (MEPs) and Council of Ministers (EU member state governments) negotiating their respective positions.
The European Commission has requested that the European standards organisations (CEN / CENELEC) develop technical standards for AI Act compliance in parallel with the legislative process, to be completed by the time the Act is adopted, in 2024 at the latest. This is unusual, as standards are usually developed after a law is adopted, and it demonstrates the urgency with which the Commission is treating the AI Act.
Before the AI Act can be adopted, the EU institutions need to reach agreement on the final text. While some issues remain outstanding, others are not proving contentious, meaning it is highly likely that the adopted AI Act will reflect these positions:
○ Hefty fines for non-compliance: Up to €30m or 6% of global annual turnover (whichever is higher), for the most serious offences.
○ The Act is extra-territorial: It will apply to any company worldwide, if they are selling into or using their AI system in the EU.
○ Grace period: Once the Act is adopted, businesses will likely have a grace period before any enforcement action starts.
○ There is widespread agreement regarding the Commission's proposals for the mandatory requirements for providers and users of high-risk AI systems, including risk management frameworks, conformity assessments, quality assurance testing and technical documentation / record keeping.
Enterprises will face a multitude of compliance obligations when the AI Act is adopted.
Conformity assessments and technical standards will be key for enterprises
Organisations will have to establish AI risk management frameworks and undertake conformity assessments to demonstrate that their AI systems are compliant with the Act. This is estimated to result in compliance costs of between €200,000-€330,000 per company.
The technical standards developed by CEN / CENELEC are voluntary, but organisations that adopt and follow them will benefit from a presumption of conformity with the AI Act (in the relevant area).
On 28 September 2022, the European Commission published proposals for an AI Liability Directive.
The AI Act and the AI Liability Directive are two sides of the same coin.
The AI Act is designed to prevent harm caused by AI, whereas the AI Liability Directive is designed to ensure that victims are fairly compensated if harm occurs.
The Directive will make it easier for individuals to claim damages against companies for harm caused by their AI system.
By exposing enterprises to the possibility of being held liable, and having to pay compensation, for harm caused by their AI systems, and by directly linking non-compliance with the AI Act to liability for AI-induced harm, the Directive incentivises compliance with the AI Act.
It will not be long before enterprises that develop and deploy AI systems are obliged to comply with the AI Act's broad set of requirements.
Forward-thinking enterprises should therefore act now to establish AI risk management frameworks, minimising the legal, reputational and commercial damage that could result from falling foul of the AI Act.
Just like the GDPR did for privacy, the AI Act will shine a spotlight on AI risk, significantly increasing awareness of the importance of responsible AI among businesses, regulators and the wider public.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.