Artificial intelligence (AI) systems have been widely adopted across sectors and applications. Accordingly, regulatory frameworks aimed at governing the risks associated with these systems have followed suit.
The most prominent and comprehensive example of such a framework is the European Union’s (EU) ongoing work on a regulation harmonising rules applicable to AI systems throughout its 27 member states, also known as the EU AI Act.
Although the EU AI Act’s final details are currently being debated by European lawmakers ahead of an expected implementation date by the end of 2023, the Act is set to affect the whole AI ecosystem, sparking debate and incentivising early commitment to its principles across the industry.
This blog post gives an overview of the impact that the EU AI Act is set to have on enterprises using AI and how to prepare for compliance.
The EU AI Act sets out requirements for AI systems, and it is primarily the providers of those systems who are obliged to comply with them. For the purposes of the EU AI Act, a provider is defined as: “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”.
Providers are not the only entities covered by the EU AI Act. All versions of the EU AI Act impose obligations on importers and distributors of AI systems as well. Additionally, albeit using different terminology (either “user” or “deployer”), all versions also impose obligations on the person using the AI system or who authorised such use.
While the definition of AI has been one of the most debated topics in the drafting of the Act, and how the final text will define an AI system is not yet certain, the proposed definitions share a number of common elements: the techniques used, operation with a certain level of autonomy, and influence on the environment the system interacts with.
The definition in the latest text, adopted by the European Parliament as its negotiating position, describes an AI system as: “a system designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, or decisions, influencing the environments with which the AI system interacts”.
Given the widespread applications of AI and the complexity of defining the term, some of these concepts are likely to be open to interpretation. As such, judgements about whether a tool falls within the scope of the Act should be made by someone with legal or regulatory expertise. Depending on the variety and scale of their AI systems, this may see enterprises, particularly SMEs, incur significant costs, meaning that simply determining whether compliance is required can impose a financial burden before any steps are taken towards compliance. On the other hand, with penalties for non-compliance reaching up to €40 million or 7% of global annual turnover under the Parliament’s text, determining status as a covered entity is vital to avoid even larger financial costs.
The EU AI Act adopts a risk-based approach to regulating systems that are in scope, meaning that it does not prohibit or restrict all AI systems but sets forth rules for AI systems depending on their risk classification. There are three main risk levels under the EU AI Act:
AI systems posing unacceptable risk, such as those used for biometric identification or social scoring, are prohibited. These systems are not subject to a prior risk assessment; they are instead considered inherently harmful due to the nature of their use, and it will be illegal to make them available in the EU under the AI Act.
High-risk AI systems, by contrast, are subject to a set of requirements (Articles 8-15). This category covers systems designed for specific use cases, such as those used in education and vocational training, employment decisions, and law enforcement. Finally, systems that fall into neither category are governed only through a voluntary codes-of-conduct scheme (Article 69).
A further group of AI systems poses a limited transparency risk and is therefore subject to some additional measures (Article 52). These systems, which include emotion recognition systems and AI systems generating deep fakes, are commonly depicted as sitting between the high-risk and low-risk categories. It should be emphasised that while prohibited, high-risk, and low-risk are mutually exclusive categories, meaning an AI system can fall under only one of them, the requirements under Article 52 can apply to both high-risk and low-risk AI systems.
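To make the structure of this classification concrete, the tiering described above can be sketched in code. This is purely an illustrative model, not a legal tool: the category names and example use cases below are hypothetical placeholders, and real classification requires legal analysis of the final text and its annexes.

```python
# Hypothetical sketch of the EU AI Act's risk-tier structure.
# The use-case labels below are illustrative assumptions, not the
# Act's actual annex wording.

PROHIBITED = {"social_scoring", "biometric_identification"}
HIGH_RISK = {"education_scoring", "employment_screening", "law_enforcement"}
TRANSPARENCY = {"emotion_recognition", "deep_fake_generation"}


def classify(use_case: str) -> dict:
    """Return the mutually exclusive risk tier plus any Article 52 duties."""
    if use_case in PROHIBITED:
        tier = "prohibited"          # banned outright
    elif use_case in HIGH_RISK:
        tier = "high_risk"           # Articles 8-15 requirements apply
    else:
        tier = "low_risk"            # voluntary codes of conduct (Article 69)
    # Article 52 transparency duties cut across high- and low-risk tiers,
    # so they are tracked separately from the exclusive tier.
    return {"tier": tier, "article_52": use_case in TRANSPARENCY}


print(classify("employment_screening"))
print(classify("deep_fake_generation"))
```

Note how the Article 52 flag is returned alongside, rather than instead of, the risk tier: this mirrors the point above that transparency obligations overlay the mutually exclusive categories rather than forming a fourth one.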
In addition to these risk-based classifications, new groups of AI systems have been introduced in the drafts of the Council and the Parliament that are not present in the Commission’s initial proposal. Under these texts, foundation models and general-purpose AI systems are subject to a different set of requirements.
It should be noted that the list of AI systems falling under these classifications is not finalised yet and is different in drafts prepared by the three major European Institutions: the European Commission, the Council of the EU, and the European Parliament.
Monitoring developments on this classification alone will be vital for enterprises using AI systems to ensure that appropriate compliance steps are taken in the event that their system is classified as prohibited or high-risk under the EU AI Act.
The requirements applicable to an AI system depend firstly on its risk classification and secondly on its function. Notably, the requirements for foundation models and general-purpose AI systems are not present in the Commission’s initial proposal.
Among these requirements, the most comprehensive framework applies to high-risk AI systems and is set out in Articles 8 to 15. Article 8 of the EU AI Act stipulates that high-risk AI systems must comply with the requirements provided in Articles 9 to 15, which cover risk management, data and data governance, technical documentation, record-keeping, transparency and provision of information to users, human oversight, and accuracy, robustness and cybersecurity.
These requirements are principle-based, meaning that the precise manner in which they must be complied with is not prescribed under the EU AI Act and may vary depending on the technical features of the AI system in question. Non-compliance with these requirements is sanctioned by significant administrative fines under Article 71 of the EU AI Act.
As such, enterprises that use or deploy AI systems covered by the EU AI Act will need to invest significant resources in compliance, including financial outlay, legal expertise, and the establishment and implementation of appropriate procedures.
Although the EU AI Act is not yet binding law, it is already affecting the whole AI ecosystem as well as the international regulatory framework on AI. The Commission is determined to foster early commitment to the principles and requirements of the EU AI Act; to this end, it has entered into a collaboration with major AI companies under the so-called “AI Pact”.
Simultaneously, CEN and CENELEC are developing harmonised standards for the EU AI Act in collaboration with other international standardisation organisations. These standards will not only be a cornerstone of the Act’s implementation but will also shape the international AI industry. In light of this, the EU AI Act is expected to become a gold standard for the AI industry, and monitoring developments and preparing for the Act early gives enterprises a competitive advantage.
It is clear that the EU AI Act is a complex piece of legislation that will require significant expertise to navigate, meaning that compliance cannot happen overnight. While the exact pathway each enterprise will need to follow within the regulatory framework will be unique, the journey will comprise some broad steps under the latest text provided by the European Parliament – take a look at the graphic below for a visualisation.
Holistic AI can help you identify and classify your AI systems, preparing you for the requirements of the EU AI Act and tracking international developments in AI regulation.
Schedule a call with a member of our specialist team to find out more.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.