
The EU AI Act and its Potential Impact on Enterprises Harnessing the Power of AI

Authored by
Osman Gazi Güçlütürk
Legal & Regulatory Lead in Public Policy at Holistic AI
Published on
Oct 23, 2023

Artificial intelligence (AI) systems have been widely adopted across sectors and applications. Accordingly, regulatory frameworks aimed at addressing and governing the risks associated with these systems have followed suit.

The most prominent and comprehensive example of such a framework is the European Union’s (EU) ongoing work on a regulation harmonising rules applicable to AI systems throughout its 27 member states, also known as the EU AI Act.

Although the EU AI Act’s final details are still being negotiated by European lawmakers, with a political agreement expected by the end of 2023, the Act is set to affect the whole AI ecosystem. This has sparked debate across the industry and created an incentive for early commitment to the Act’s principles.

This blog post gives an overview of the impact that the EU AI Act is set to have on enterprises using AI and how to prepare for compliance.

1. Who is within the scope of the EU AI Act?

The EU AI Act regulates AI systems and outlines requirements. Primarily, it is the providers of AI systems who are obliged to comply with the requirements. For the purposes of the EU AI Act, a provider is defined as: “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”.

Providers are not the only entities covered by the EU AI Act. All versions of the Act also impose obligations on importers and distributors of AI systems. Additionally, albeit using different terminology (either “user” or “deployer”), all versions impose obligations on the person using the AI system or authorising such use.

2. What is considered AI under the EU AI Act?

While the definition of AI has been one of the most debated topics in the drafting of the Act, and how the final text will define an AI system is not yet certain, the proposed definitions share a number of common elements: the techniques used, operation with a certain level of autonomy, and influence on the environment the system interacts with.

The definition adopted in the latest European text, which has been adopted by the European Parliament as the negotiating position, defines AI systems as: “a system designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, or decisions, influencing the environments with which the AI system interacts”.

Given the widespread applications of AI and the complexity of defining the term, it is likely that some of these concepts will be open to interpretation. As such, judgements about whether a tool is within the scope of the Act should be made by someone with legal or regulatory expertise. This may see enterprises, particularly SMEs, incur significant costs depending on the variety and scale of their AI systems, meaning that simply determining whether compliance is required can impose a financial burden before any steps are taken towards compliance. On the other hand, penalties for non-compliance can reach up to €40 million or 7% of global annual turnover under the Parliament’s text, meaning that determining status as a covered entity is vital to avoid even larger financial costs.

3. How are AI systems classified under the EU AI Act?

The EU AI Act adopts a risk-based approach to regulating systems that are in scope, meaning that it does not prohibit or restrict all AI systems but sets forth rules for AI systems depending on their risk classification. There are three main risk levels under the EU AI Act:

  1. AI systems with unacceptable risk (aka prohibited AI systems, Article 5),
  2. High-risk AI systems (Article 6),
  3. Low-risk (or minimal-risk) AI systems

AI systems with unacceptable risk, such as AI systems used for biometric identification or social scoring, are prohibited. These systems are not subject to a prior risk assessment but are instead considered inherently risky due to the nature of their use. Under the AI Act, it will be illegal to make them available in the EU.

On the other hand, high-risk AI systems are subject to a set of requirements (Articles 8-15). This applies to systems designed for specific use cases, such as education and vocational training, employment decisions, and law enforcement. Finally, systems that fall into neither category are governed only via a voluntary codes-of-conduct scheme (Article 69).

There is another group of AI systems that pose a limited transparency risk and are therefore subject to additional measures (Article 52), such as emotion recognition systems or AI systems generating deep fakes. These systems are commonly depicted as sitting between the high-risk and low-risk categories. It should be emphasised, however, that while prohibited, high-risk, and low-risk are mutually exclusive categories, meaning that an AI system can fall under only one of them, the requirements under Article 52 can apply to both high-risk and low-risk AI systems.
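The structure of this scheme can be sketched in code: a minimal, purely illustrative Python model in which the mutually exclusive risk tiers are an enum and the Article 52 transparency duties are an orthogonal flag. The flags and example triggers (social scoring, employment use, deep fakes) are simplified stand-ins drawn from the examples above; real scoping under the Act requires legal analysis, not boolean checks.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Article 6)"
    LOW = "low/minimal risk (Article 69 codes of conduct)"

@dataclass
class AISystem:
    name: str
    # Illustrative flags only -- real scoping requires legal analysis.
    social_scoring: bool = False
    used_in_employment: bool = False
    generates_deepfakes: bool = False

def classify(system: AISystem) -> tuple[RiskTier, bool]:
    """Return (risk tier, whether Article 52 transparency duties apply).

    The tiers are mutually exclusive, but the Article 52 flag is
    orthogonal: it can attach to both high-risk and low-risk systems.
    """
    transparency = system.generates_deepfakes
    if system.social_scoring:
        return RiskTier.PROHIBITED, transparency
    if system.used_in_employment:
        return RiskTier.HIGH, transparency
    return RiskTier.LOW, transparency

tier, needs_transparency = classify(
    AISystem("cv-screener", used_in_employment=True)
)
print(tier.name, needs_transparency)  # HIGH False
```

Note how returning the transparency duty as a separate value, rather than as a fourth tier, mirrors the point above: Article 52 is not a risk level of its own.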

In addition to these risk-based classifications, the Council’s and the Parliament’s drafts introduce new groups of AI systems that are not present in the Commission’s initial proposal. Under these texts, foundation models and general-purpose AI systems are subject to a separate set of requirements.

It should be noted that the list of AI systems falling under these classifications is not finalised yet and is different in drafts prepared by the three major European Institutions: the European Commission, the Council of the EU, and the European Parliament.

Monitoring the developments on this classification alone will be vital for enterprises using AI systems to ensure that appropriate compliance steps are taken in the event that their system is considered prohibited or high-risk by the EU AI Act.

4. What are the requirements for covered entities under the EU AI Act?

The requirements for AI systems depend firstly on their risk classification and secondly on their function:

i. Requirements based on risk-based classification:

  1. Any prohibited AI systems must be stopped unless an exception provided under Article 5 applies.
  2. Requirements between Articles 9-15 must be complied with for high-risk AI systems.
  3. If one of the cases set forth under Article 52 is present, the respective measures must be implemented.
  4. For low-risk (or minimal-risk) AI systems, the availability and feasibility of voluntary codes of conduct must be explored.

ii. Requirements based on function and nature:

(These are not present in the Commission’s initial proposal.)

  1. For foundation models, requirements under Article 28b of the European Parliament’s position must be taken into account.
  2. For general-purpose AI systems, requirements under Articles 4a and 4b of the Council of the EU’s general approach must be considered.

5. Requirements for high-risk systems under the AI Act

Among these requirements, the most comprehensive list and framework belongs to high-risk AI systems. Requirements for high-risk AI systems are provided between Articles 8 and 15. Article 8 of the EU AI Act stipulates that high-risk AI systems should comply with the requirements provided in the following articles:

  1. Establishing, implementing, documenting, and maintaining a risk management system (Article 9).
  2. Maintaining data governance and quality requirements (Article 10).
  3. Drawing up detailed technical documentation (Article 11).
  4. Keeping records and logs of the operations of their AI systems (Article 12).
  5. Designing and developing their AI systems in a transparent manner from the perspective of prospective users of the system and providing instructions (Article 13).
  6. Providing appropriate human-machine interface tools and human oversight (Article 14).
  7. Designing and developing their AI systems with appropriate levels of accuracy, robustness, and cybersecurity (Article 15).
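Because these obligations form a fixed list of articles, enterprises often track them as a compliance checklist. The sketch below shows one way to do so in Python; the one-line summaries mirror the list above, while the tracking structure itself is our own illustration, not something prescribed by the Act.

```python
# Illustrative checklist of the high-risk requirements (Articles 9-15).
# The summaries paraphrase the Act; the structure is for illustration only.
HIGH_RISK_REQUIREMENTS = {
    "Article 9": "Risk management system established and maintained",
    "Article 10": "Data governance and quality requirements met",
    "Article 11": "Detailed technical documentation drawn up",
    "Article 12": "Record-keeping and logging of operations in place",
    "Article 13": "Transparency and instructions for users provided",
    "Article 14": "Human-machine interface tools and oversight provided",
    "Article 15": "Accuracy, robustness and cybersecurity ensured",
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the articles for which compliance evidence is still missing."""
    return [art for art in HIGH_RISK_REQUIREMENTS if art not in completed]

print(outstanding({"Article 9", "Article 11"}))
# ['Article 10', 'Article 12', 'Article 13', 'Article 14', 'Article 15']
```

Since the requirements are principle-based, each checklist entry would in practice point to the enterprise's own evidence (policies, logs, documentation) rather than a simple yes/no flag.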

These requirements are principle-based: the precise manner of compliance is not prescribed by the EU AI Act and may vary depending on the technical features of the AI system in question. Non-compliance with these requirements is sanctioned by significant administrative fines under Article 71 of the EU AI Act.

As such, enterprises that use or deploy AI systems covered by the EU AI Act will be required to invest significant resources in compliance, including financial costs, legal expertise, and the establishment and implementation of internal procedures.

6. Towards finalisation of the EU AI Act

Although the EU AI Act is not yet binding law, it already affects the whole AI ecosystem as well as the international regulatory framework for AI. The Commission is determined to foster early commitment to the Act’s principles and requirements; to this end, it has entered into collaboration with major AI companies under the so-called “AI Pact”.

Simultaneously, CEN and CENELEC are developing harmonised standards for the EU AI Act in collaboration with other international standardisation organisations. These standards will not only be a cornerstone of the Act’s implementation but will also affect the international AI industry. In light of this, the EU AI Act is expected to become a gold standard for the AI industry, and monitoring developments and preparing for the Act early grants enterprises a competitive advantage.

7. How should enterprises prepare for the EU AI Act?

It is clear that the EU AI Act is a complex piece of legislation that will require significant expertise to navigate, meaning that compliance cannot happen overnight. While the exact pathway each enterprise will need to follow within the regulatory framework will be unique, the journey will comprise some broad steps under the latest text provided by the European Parliament – take a look at the graphic below for a visualisation.

[Figure: Steps under the latest text provided by the European Parliament]

Holistic AI can help you identify and classify your AI systems, preparing you for the requirements of the EU AI Act and tracking international developments in AI regulation.

Schedule a call with a member of our specialist team to find out more.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
