
EU AI Act Now in Effect: A Practical Guide for Global Enterprises

Published on Jul 31, 2024

The EU AI Act marks a pivotal moment in the regulation of artificial intelligence, setting the stage for comprehensive oversight of AI technologies. As the first of its kind globally, this legislation seeks to ensure the safe, transparent, and ethical deployment of AI, with an impact reaching far beyond the European Union's borders. Effective from August 1, 2024, the Act introduces a phased enforcement schedule, making it crucial for global enterprises, especially those operating in or with the EU, to understand and comply with its provisions.

This guide aims to provide practical steps for EU AI Act readiness assessment, helping businesses navigate the complexities of this new regulatory landscape. It is designed specifically for C-suite executives, with a focus on CISOs (Chief Information Security Officers) and business leaders responsible for strategic decision-making. By outlining the key provisions, compliance strategies, and potential pitfalls, this guide equips enterprises with the knowledge to align their AI operations with regulatory requirements.

Key Objectives:

  • Decoding the EU AI Act: Providing a general overview to clarify the regulation.
  • Impact on Global Businesses: Highlighting the extraterritorial reach and implications for non-EU companies.
  • Navigating Compliance: Providing a step-by-step approach to meet regulatory standards.
  • Opportunities and Challenges: Exploring the potential for innovation within the regulatory framework.

As the EU AI Act is poised to become a global standard, this guide serves as an essential resource for aligning corporate strategies with emerging regulations.

Understanding the EU AI Act and its timeline

The EU AI Act is a comprehensive legislative framework regulating the development and use of AI within the European Union. Initially proposed by the European Commission in April 2021, it was published in the Official Journal of the EU on July 12, 2024. Key dates for implementation include:

  • August 1, 2024: Act comes into force.
  • February 2, 2025: Start of application for general provisions and prohibited practices.
  • May 2, 2025: Deadline for publishing codes of practice.
  • August 2, 2025:  
    • Enforcement of rules concerning notified bodies, general-purpose AI models, governance structure, and penalties.
    • Deadline for the Commission to develop guidance on reporting serious incidents.
  • February 2, 2026:
    • Deadline for the publication of the guidance for the classification of high-risk systems by the Commission.
    • Deadline for the Commission to implement an act detailing the post-market monitoring plan and its content.
  • August 2, 2026:
    • General application date for the Act.
    • Deadline for national competent authorities to make at least one regulatory sandbox operational at national level.
  • August 2, 2027: Deadline for GPAI models placed on the market before August 2, 2025 to comply with the relevant GPAI model obligations.
  • December 31, 2030: Deadline for AI systems that are components of large-scale IT systems established by certain EU legislation, and that were placed on the market before August 2, 2027, to comply with the Act.
Understanding the EU AI Act timeline
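As a purely illustrative sketch, the phased schedule above can be captured as a simple date lookup for tracking which milestones have already taken effect. The dates come from the timeline above; the descriptions are abridged and this is not a substitute for reading the Act:

```python
from datetime import date

# Key EU AI Act milestones (dates from the timeline above; descriptions abridged).
MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "General provisions and prohibited practices apply",
    date(2025, 5, 2): "Codes of practice published",
    date(2025, 8, 2): "Rules for notified bodies, GPAI models, governance, penalties apply",
    date(2026, 8, 2): "General application date",
    date(2027, 8, 2): "Compliance deadline for pre-existing GPAI models",
}

def milestones_passed(as_of: date) -> list[str]:
    """Return descriptions of milestones that have already taken effect."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= as_of]
```

For example, `milestones_passed(date(2025, 3, 1))` returns the first two entries, since only those deadlines fall on or before that date.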

Core Objectives:

The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.

Scope and Applicability: Separation of Systems and Models

The EU AI Act's regulations apply to both AI systems and general-purpose AI (GPAI) models, each with distinct considerations. The Act's extraterritorial scope ensures that any company, regardless of its location, is subject to the regulations if it offers AI systems in the EU market or its systems impact EU citizens.

AI Systems

The EU AI Act categorizes AI systems based on the risk they pose, with varying regulatory requirements:

Category | Key Focus | Examples
Prohibited Practices | Ban on harmful practices | Social scoring, real-time biometric ID for law enforcement
High-Risk AI Systems | Risk assessments, data governance, transparency | Healthcare diagnostics, employment decision-making, facial recognition
Limited Risk Systems | Transparency and accountability measures | AI systems interacting with humans, biometric data processing
Minimal Risk Systems | Encouraged to follow voluntary codes of conduct | Simple chatbots, basic automation systems

AI Systems: Risk-Based Approach

AI Models

Special obligations apply to GPAI models, focusing on their broad applicability and potential systemic risks:

Type | Obligations | Notes
General-Purpose AI Models (GPAI) | Provide technical documentation, ensure compliance with transparency requirements | GPAI models used in multiple domains, e.g., natural language processing tools
GPAI Models with Systemic Risk | Additional obligations, such as rigorous testing and reporting on systemic risks | High-impact GPAI models with potential widespread effects, stringent regulatory scrutiny required

Categorization helps in delineating the specific compliance measures needed, ensuring that all AI applications, whether specialized or general, adhere to the EU's high standards for safety and ethics.

This comprehensive approach not only safeguards citizens but also provides a structured pathway for global enterprises to develop and deploy AI technologies ethically and responsibly.

Key Provisions and Implications

The EU AI Act outlines essential rules and standards for the development and use of AI systems, categorizing them by risk and detailing the necessary compliance measures. These provisions are fundamental in ensuring the safe and ethical deployment of AI technologies.

Risk-Based Classification

The EU AI Act introduces a Risk-Based Classification system, categorizing AI systems into four levels based on the risk they pose. This determines the specific regulatory requirements for each category:

Risk Level | Description | Requirements
Unacceptable Risk | AI practices posing a serious threat | Prohibited (e.g., AI used for real-time biometric identification by law enforcement)
High Risk | Significant impact AI systems | Rigorous risk assessments, data governance, and transparency requirements
Limited Risk | AI systems and models with moderate impact | User notification, transparency, and limited accountability measures
Minimal Risk | Low-impact AI systems and models | Basic transparency requirements, minimal regulatory oversight
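For illustration only, the four-tier classification can be sketched as a lookup from risk level to headline treatment. The tier names and summaries mirror the table above; in practice, classifying a real system depends on the Act's detailed criteria, not a simple lookup:

```python
# Hypothetical sketch: map a risk level to its headline regulatory treatment.
# The tiers and summaries mirror the table above; actual classification
# depends on the Act's detailed criteria (e.g., Annex III use cases).
RISK_TIERS = {
    "unacceptable": "Prohibited",
    "high": "Rigorous risk assessments, data governance, transparency",
    "limited": "User notification and transparency measures",
    "minimal": "Voluntary codes of conduct",
}

def headline_requirement(risk_level: str) -> str:
    """Return the headline requirement for a given risk tier."""
    try:
        return RISK_TIERS[risk_level.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}")
```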

General-Purpose AI (GPAI) Models

General-Purpose AI models are versatile AI technologies capable of performing a wide range of tasks across various domains. Under the EU AI Act, GPAI models must adhere to specific requirements, particularly when they are incorporated into systems used in high-risk contexts.

These classifications help enterprises identify the necessary compliance measures for their AI systems, ensuring alignment with the EU AI Act’s stringent standards. Specifically, the Act requires:

  • Model Evaluation: Use standardized protocols and tools for evaluation, including adversarial testing.
  • Incident Tracking and Reporting: Monitor, document, and report significant incidents and corrective actions to the AI Office and national authorities.
  • Cybersecurity Measures: Ensure robust cybersecurity protections for the GPAI model with systemic risk (GPAISR) and its infrastructure.

There are additional, more stringent obligations beyond these for providers of GPAI models with systemic risk. For more, refer to our detailed blog on GPAI models and their obligations.

How can I check whether my system is at risk under the EU AI Act?

Our EU AI Act risk assessment tool helps businesses identify and assess the compliance status of their AI systems, ensuring alignment with regulatory requirements.

Penalties and Fines under the EU AI Act

The EU AI Act enforces a structured penalty system, imposing fines based on the severity and nature of violations. The penalties are designed to deter non-compliance and ensure the responsible use of AI technologies.

  • Severe Violations: Up to €35 million or 7% of global annual turnover, whichever is higher.
  • Other Violations: Up to €15 million or 3% of global annual turnover, whichever is higher.
  • Incorrect Information: Up to €7.5 million or 1% of global annual turnover, whichever is higher.
  • SMEs Consideration: Fines are capped at the lower of specified percentages or amounts for SMEs to prevent disproportionate burdens.
  • GPAI Model-Related Violations: As per Article 101.1, providers of general-purpose AI models may face fines up to 3% of their annual total worldwide turnover or €15 million, whichever is higher, for specific infractions, such as:
    • Infringement of relevant provisions of the Regulation.
    • Failure to comply with requests for documents or information under Article 91, or providing incorrect, incomplete, or misleading information.
    • Non-compliance with measures requested under Article 93.
    • Failure to provide access to GPAI models or GPAI models with systemic risk for evaluation as required under Article 92.
Violations and Corresponding Penalties
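The dual-cap structure of these fines can be illustrated with a small calculation. The figures below come from the list above; the SME treatment (capped at the lower of the two figures) is simplified here and the actual amount in any case is set by the enforcing authority:

```python
def max_fine(turnover_eur: float, pct: float, fixed_cap_eur: float,
             is_sme: bool = False) -> float:
    """Maximum fine under the EU AI Act's dual cap (illustrative).

    For most companies the cap is the *higher* of the percentage of
    worldwide annual turnover and the fixed amount; for SMEs it is the
    *lower*. Per-violation-class figures are given in the list above.
    """
    pct_cap = turnover_eur * pct
    return min(pct_cap, fixed_cap_eur) if is_sme else max(pct_cap, fixed_cap_eur)

# Severe violation, €1bn turnover: 7% of turnover (€70m) exceeds the €35m figure.
severe = max_fine(1_000_000_000, 0.07, 35_000_000)              # 70,000,000.0
# Same violation class for an SME: the lower of the two figures applies.
sme = max_fine(1_000_000_000, 0.07, 35_000_000, is_sme=True)    # 35,000,000.0
```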

Impact of Non-Compliance

Non-compliance with the EU AI Act can lead to significant financial penalties, operational disruptions, and reputational damage, especially for large corporations. For SMEs, these consequences underscore the need for thorough compliance to safeguard business integrity.

Preparing for Compliance – Practical Guide

Organizations need to take crucial steps to align with the EU AI Act, including assessing current AI systems, establishing robust governance frameworks, and identifying potential compliance gaps and risks.

Immediate Actions for Businesses

  • Inventory and Classification: Led by the CTO (Chief Technology Officer), organizations should assess and categorize all existing AI systems to understand their scope and compliance requirements.
  • Governance Frameworks: The CIO (Chief Information Officer) should implement or update AI governance structures, ensuring alignment with the latest regulatory standards and best practices.
  • Gap Analyses and Risk Assessments: The CSO (Chief Security Officer) is responsible for conducting thorough gap analyses and risk assessments to identify areas where the organization may fall short of compliance requirements and to assess the potential risks associated with AI systems.
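One minimal way to start the inventory and classification step is a structured record per system. The schema below is an assumption about what a useful internal register might track; the Act does not prescribe these fields:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative entry for an internal AI system register.

    The fields are assumptions about what a useful register tracks;
    the EU AI Act itself does not mandate this schema.
    """
    name: str
    purpose: str
    risk_level: str                 # e.g. "high", "limited", "minimal"
    role: str                       # "provider" or "deployer" under the Act
    uses_gpai_model: bool = False
    compliance_gaps: list[str] = field(default_factory=list)

def high_risk_systems(register: list[AISystemRecord]) -> list[str]:
    """Names of systems the organization has classified as high-risk."""
    return [r.name for r in register if r.risk_level == "high"]
```

A register like this gives the gap-analysis step a concrete starting point: each high-risk entry can then be checked against the Act's documentation, data-governance, and transparency requirements.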

Leveraging Technology and Expertise

  • Invest in Compliance Tools: The CISO (Chief Information Security Officer) should advocate for the adoption of advanced compliance tools and software to streamline the monitoring and management of AI systems, ensuring they meet the EU AI Act's requirements.
  • Expert Consultation: Collaboration with external legal and AI experts is crucial. The Chief Legal Officer (CLO) can coordinate consultations and audits with specialists to ensure a comprehensive understanding of the regulatory landscape and to facilitate continuous improvement in compliance practices.

Operational and Legal Readiness

  • Training and Certification: The HR Director should create training programs to educate staff on AI governance and compliance and offer certification to boost team expertise.
  • Communication and Transparency: The Chief Communications Officer (CCO) should craft strategies for clear communication about compliance and AI policies with stakeholders, ensuring transparency.
  • Futureproofing: The Strategy Officer should track legislative changes and global AI regulations like the EU AI Act to keep the organization compliant and up to date.

By engaging these key roles, businesses can establish a comprehensive and proactive approach to compliance with the EU AI Act. This multi-faceted strategy not only mitigates risks but also positions the organization as a leader in responsible and ethical AI innovation.

Global Implications and Strategic Considerations

The EU AI Act's influence extends beyond Europe, setting a global precedent for AI regulation. Compliance offers strategic benefits, including building trust and mitigating risks.

Impact Beyond the EU

The EU AI Act is poised to significantly influence global AI regulations, serving as a model for international standards through the "Brussels effect." This phenomenon occurs when EU regulations set a precedent that other countries follow, shaping global regulatory landscapes. As the EU establishes stringent guidelines for AI governance, it is likely that other jurisdictions will align their regulations with these standards, impacting multinational companies operating in multiple regions.

Strategic Advantage of Compliance

Complying with the EU AI Act offers key strategic benefits:

  • Building Trust: Organizations that adhere to the Act's regulations can build stronger trust with customers and stakeholders by demonstrating a commitment to ethical AI practices and transparency.
  • Mitigating Risks: Proactive compliance helps organizations avoid potential legal and reputational risks associated with non-compliance, ensuring that they are well-prepared for the evolving regulatory environment. This not only safeguards the organization but also enhances its market position by setting a high standard for responsible AI usage.

My Company is Based in the U.S. Why Should I Care About the EU AI Act?

The EU AI Act affects U.S. companies that operate in or provide services to the EU, even without a physical presence. You need to comply if:

  1. Direct Operations: Your company offers services or products, including through digital platforms, within the EU.
  2. Supply Chain and Partnerships: Your AI technology is integrated into products or services sold by EU companies.
  3. AI System's Use in the EU: The EU AI Act applies to AI systems if their output is used in the EU, regardless of where the provider or deployer is located.

Additionally, the EU AI Act sets a precedent influencing global AI regulations. U.S. policymakers may align future regulations with its standards, affecting companies indirectly.

As part of its commitment to responsible AI, Unilever has implemented a thorough review process for new AI projects. This process ensures that all AI initiatives align with the stringent requirements of the EU AI Act.

"To ensure regulatory compliance, potential new projects using AI systems at Unilever are assessed by a cross-functional team of subject matter experts, including our partners at Holistic AI. They review the needs of the project, manage risks, and suggest improvements or mitigation strategies that might be needed prior to deployment, as well as any ongoing monitoring."

Source: The EU AI Act has arrived: how Unilever is preparing

This collaborative approach allows Unilever to proactively address potential risks and continuously monitor AI deployments, ensuring they meet both legal and ethical standards. By partnering with Holistic AI, Unilever demonstrates its dedication to maintaining high standards of AI governance and compliance.

How Holistic AI can help

Navigate the complexities of the EU AI Act with Holistic AI's comprehensive AI governance platform. Our all-in-one command center offers complete oversight of your AI systems, helping you optimize usage, prevent risks, and adapt to the evolving regulatory landscape. This strategic approach not only maximizes your AI investment but also enhances the efficiency of AI development through increased oversight and operationalized governance.

Schedule a demo today to discover how Holistic AI can support your company's adaptability to the EU AI Act and safeguard your operational future.

Conclusion

Proactive compliance with the EU AI Act is crucial. Rather than seeing it as a regulatory burden, it should be viewed as an opportunity for ethical AI innovation. Partner with Holistic AI to ensure your business is prepared and compliant, leveraging our comprehensive governance platform for seamless adherence to the EU AI Act.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
