Landmark, seismic, pivotal – whichever adjective you prefer to use, there’s no doubt the AI Act is a game-changer. It is the first proposed set of rules to take a comprehensive approach to AI regulation in one of the world's biggest markets, the European Union.
The Act has been commended for its joined-up thinking, its risk-based approach to AI regulation, and its emphasis on safety and transparency. However, when the legislation takes effect, businesses will be operating in an entirely new paradigm when it comes to developing and deploying AI systems.
Here, we’ll explore the EU AI Act and explain how businesses can comprehensively prepare for it. With penalties for non-compliance of up to €40 million or 7% of global turnover, whichever is higher, organisations can scarcely afford to be slow to react.
The bill has been subject to bureaucratic to-and-fro since it was first proposed in April 2021, but recent developments signify that the legislative process is now entering its endgame. On 14 June 2023, the European Parliament voted to move forward with the EU AI Act, with the bill garnering overwhelming support.
The next stage, the final hurdle in the legislative process, will see the Act progress to trilogues, where the European Commission, European Parliament, and Council of the EU will informally convene to arrive at the final version of the legislation.
On this timeline, the AI Act will likely be approved by the end of 2023. A two-year implementation period is expected thereafter, meaning the Act will probably be enforced from 2026. Businesses within the Act’s scope therefore have, at the time of writing, around two-and-a-half years to prepare, and it is essential not to let that preparatory period go to waste.
The first step to preparing for the Act is establishing a thorough understanding of the proposal.
The text of the Act itself is long – 394 pages to be precise – and most enterprises will not have time to forensically examine the legislation, but there are some core points that are essential to consider.
Significantly, the Act grades AI systems according to four levels of risk: minimal, limited, high, and unacceptable. A system’s obligations under the Act correspond to its risk classification. For example, a spam email filter deemed to pose minimal risk carries no obligations, while systems with limited risk carry transparency obligations: users must be made aware that they are interacting with an AI system.
At the top end of the spectrum, high-risk applications, such as systems deployed in critical aspects of the healthcare or welfare sectors, face the most stringent obligations. Systems classified as posing an unacceptable level of risk, meanwhile, are banned altogether; these include real-time biometric identification systems and those that use subliminal techniques.
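To make the tiered structure concrete, here is a minimal Python sketch of the four tiers and a simplified mapping to their headline obligations. The tier names reflect the Act, but the obligation summaries are illustrative shorthand, not language taken from the legislation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g. spam filters; no obligations under the Act
    LIMITED = "limited"            # transparency obligations
    HIGH = "high"                  # strictest obligations (e.g. critical healthcare or welfare uses)
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Simplified, illustrative summary of headline obligations per tier;
# the Act's actual requirements are considerably more detailed.
HEADLINE_OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.HIGH: [
        "risk management system",
        "technical documentation and record-keeping",
        "human oversight",
        "conformity assessment before placing on the market",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
}

print(HEADLINE_OBLIGATIONS[RiskTier.LIMITED])
```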
For a detailed analysis of the risk-based systems and the obligations across the various classifications, read our paper.
The following entities are covered within the EU AI Act and must prepare for the legislation:

- Providers placing AI systems on the EU market or putting them into service, regardless of where they are established
- Deployers (users) of AI systems located within the EU
- Providers and deployers based outside the EU whose systems’ outputs are used within the EU
- Importers and distributors of AI systems

There are some exemptions, such as pre-market AI research and testing, international public authorities, military AI systems, and most free and open-source AI components.
The emergence of comprehensive legislation of this magnitude is a rare occurrence. The EU AI Act has drawn comparisons with the EU’s General Data Protection Regulation (GDPR), adopted in 2016, which compelled companies to completely reevaluate their data handling practices, leading to substantial operational and procedural changes.
Given its unprecedented and complex nature, preparing for the EU AI Act will be a similarly significant undertaking. However, there are several proactive measures that can be taken to ensure readiness.
A practical starting point is to create a comprehensive inventory of your organisation's AI systems, whether developed or deployed within the EU or elsewhere. It is key to outline their intended purpose and capabilities in as much detail as possible. This catalogue will provide a clear overview of your AI infrastructure and serve as a foundation for compliance efforts.
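One lightweight way to start such a catalogue is a structured record per system. The schema below is a hypothetical illustration, not a format prescribed by the Act; the field names and example entry are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (illustrative schema)."""
    name: str
    owner: str                   # team accountable for the system
    intended_purpose: str        # what the system is meant to do
    capabilities: str            # what it can actually do
    used_in_eu: bool             # in scope even if developed elsewhere
    provisional_risk_tier: str   # minimal / limited / high / unacceptable

inventory = [
    AISystemRecord(
        name="cv-screening-v2",
        owner="Talent Acquisition",
        intended_purpose="Rank incoming job applications for recruiter review",
        capabilities="NLP-based ranking of CVs against job descriptions",
        used_in_eu=True,
        provisional_risk_tier="high",  # employment use cases are high-risk under the Act
    ),
]
```

Capturing a provisional risk tier at inventory time, even before formal classification, gives compliance teams an early view of where the heaviest obligations are likely to fall.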
Furthermore, organisations should establish explicit, transparent governance procedures and guidelines for AI applications that adhere to the Act's provisions. These measures will ensure that your organisation operates within the prescribed boundaries while fostering accountability and transparency in AI usage.
Developing an internal culture that understands the Act and actively adheres to its provisions is crucial too. This requires ongoing education and awareness programmes to familiarise employees with the Act’s requirements and implications. Additionally, investing in expertise and talent acquisition in the field of AI compliance will be vital for maintaining regulatory adherence.
Moreover, it is essential for organisations to invest in the necessary technologies and infrastructure to meet the Act's provisions. This includes identifying and implementing appropriate tools and systems that facilitate compliance monitoring, data protection, and other essential requirements outlined by the legislation.
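As a minimal illustration of what such tooling might do, the sketch below builds on the hypothetical AISystemRecord inventory above and flags systems whose provisional risk tier implies outstanding compliance work. A real monitoring platform would cover far more ground; this only conveys the idea.

```python
def flag_compliance_gaps(inventory: list) -> list:
    """Surface inventory entries whose provisional tier implies outstanding work.

    Illustrative only: real compliance monitoring would also track data
    governance, logging, accuracy testing, and human-oversight measures.
    """
    gaps = []
    for record in inventory:
        if record.provisional_risk_tier == "unacceptable":
            gaps.append(f"{record.name}: prohibited practice - must be withdrawn")
        elif record.provisional_risk_tier == "high":
            gaps.append(f"{record.name}: confirm documentation, oversight, and conformity assessment")
    return gaps

# Using the example inventory defined earlier:
for gap in flag_compliance_gaps(inventory):
    print(gap)
```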
By proactively undertaking these steps, organisations can enhance their preparedness for the EU AI Act, ensuring a smoother transition and mitigating potential compliance challenges.
The EU AI Act enforcement date may appear distant, but implementing the necessary systems and processes to achieve compliance can take time. With the Act on the horizon, it is never too early to begin preparing.
Holistic AI offers a comprehensive solution for AI risk management, covering the entire lifecycle of AI systems to minimise risks from design to deployment. With a strong commitment to helping organisations achieve EU AI Act compliance, our platform provides benefits such as:
Schedule a call to learn how Holistic AI can help your organisation prepare for EU AI Act compliance.
The intention of the AI Act proposal is to ensure that AI systems are safe and transparent, with an emphasis on the fundamental rights of EU citizens.
The latest version of the AI Act, which is expected to be the formulation that will be passed into law, could see companies fined up to €40 million or 7% of global turnover, whichever is higher.
The legislation takes a risk-based approach to regulating AI, with systems subject to varying levels of obligations according to their risk classifications. High-risk applications have the most stringent obligations, while systems deemed to have an unacceptable level of risk are banned altogether.
The Act is not yet enforced, but it is expected to have been passed into law by the end of 2023. There will then likely be a two-year period before the Act's enforcement date, by which time all organisations will be expected to be compliant.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.