From the Golden State to the heart of Europe, artificial intelligence (AI) regulation is growing significantly. In the last few years, increasing awareness of AI risks and harms has prompted governments to consider AI regulations, policies, and strategies to manage them. As a result, a growing consensus is emerging in favour of risk-based governance of AI, centred around assessing AI risks and enabling stakeholders to respond with practical and proportionate control measures.
In the EU, the proposed AI Act aims to position the region as the global leader in AI regulation, establishing the “gold standard” for protecting society and managing risks, much as the GDPR did for data protection. Narrower in focus, California has proposed amendments to its employment regulations that extend non-discrimination requirements to automated-decision systems (ADSs) in order to address bias and discrimination in hiring and employment practices. Similarly, California’s proposed Workplace Technology Accountability Act seeks to regulate the day-to-day use of automated tools in the workplace. This blog post compares California’s proposed regulations to the EU’s AI Act.
On 21 April 2021, the European Commission proposed the EU AI Act, a first-of-its-kind set of harmonised rules to regulate the development and use of artificial intelligence. Following a lengthy drafting process during which multiple member states submitted compromise texts, the European Parliament passed the draft regulation on 14 June 2023.
The Act takes a risk-based approach, with AI systems classified into four categories based on the potential harms they pose – minimal, limited, high, or unacceptable risk. Systems with minimal risk – those which comprise the majority of the market, such as spam filters and AI-enabled video games – do not have any associated obligations.
Systems with limited risk are those that i) interact with humans, ii) detect humans or categorise people based on biometric data (where not prohibited), or iii) produce or manipulate text, audio, or visual content. Limited-risk systems – which include chatbots and systems used to produce deep fakes – are subject to transparency obligations: users must be informed that they are interacting with an AI system or viewing AI-generated or manipulated content.
High-risk systems – those with the potential to significantly impact the life chances of a user – carry the most obligations. Annex III lists eight types of systems that fall into this category: biometric identification and categorisation; management and operation of critical infrastructure; education and vocational training; employment and worker management; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.
These systems are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Providers that consider that their system does not pose such a risk can, however, notify the relevant supervisory authorities.
Systems with unacceptable levels of risk are prohibited from being made available on the EU market. This includes systems that deploy subliminal techniques or exploit vulnerabilities of specific groups; systems used for social scoring; real-time biometric identification systems in public places; retrospective (“post”) remote biometric identification systems; systems that assess the risk of (re)offending; emotion recognition systems used by law enforcement or border management or in a workplace or educational setting; and systems used for indiscriminate and untargeted scraping of biometric data.
Under the Act, providers of AI systems established in the EU must comply with the regulation, as must providers in third countries that place AI systems on the EU market and users of AI systems located in the EU. Before high-risk systems can be placed on the EU market, they must undergo conformity assessments to demonstrate that they meet the Act’s requirements, and providers must also establish AI risk management processes for them. Systems that comply with the Act and pass the conformity assessment must bear CE marking and be registered in an EU database before they can be placed on the market. Following any significant change to the system – for example, if the model is retrained on new data or features are removed from the model – the system must undergo a further conformity assessment to ensure the requirements are still met before it is re-certified and re-registered in the database. Failure to comply could cost companies an estimated €200,000–€400,000.
A history of strict data privacy and protection laws is at the core of AI regulation in California. Pending updates to state employment law would regulate the use of algorithms and their capacity to discriminate against protected groups. Under the proposed amendments, any employer or third-party vendor that buys, sells, uses, or administers automated-decision systems (ADSs) or employment screening tools that automate decision-making must comply with the legislation and is prohibited from using ADSs that discriminate based on protected characteristics. The characteristics protected under the legislation include race, national origin, gender, accent, English proficiency, immigration status, driver’s license status, citizenship, height or weight, sex, pregnancy or perceived pregnancy, religion, and age, unless reliance on such criteria is shown to be job-related for the position in question and consistent with business necessity.
Meanwhile, the proposed Workplace Technology Accountability Act mandates specific risk management requirements: algorithmic impact assessments are required for automated decision systems and data protection impact assessments for worker information systems, in order to identify risks such as discrimination or bias, errors, and violations of legal rights.
In addition, under the Act, workers have the right to request and correct any information an employer is collecting, storing, analysing, or interpreting about them. The legislation also prohibits employers from processing data about employees unless the data are strictly necessary for an essential job function.
Both of California’s proposed laws are narrowly focused on automated employment decision tools used in recruiting, hiring, promotion, and workplace monitoring. While this movement to regulate AI systems used in hiring and employment-related decisions has gained traction, the EU AI Act is far more expansive, taking a sector-agnostic approach and banning certain unacceptable technologies, such as social scoring.
The European Commission, by contrast, requires conformity assessments for high-risk systems, departing from other national strategies by introducing a mandatory CE-marking procedure with a layered approach to enforcement. Like the conformity assessments required by the EU AI Act, the Workplace Technology Accountability Act requires data protection impact assessments of worker information systems and algorithmic impact assessments of automated decision systems, which can help ensure compliance with the legislation’s requirements and inform risk management strategies.
Both Acts also require ongoing monitoring and re-evaluation when significant changes are made to a system. A critical difference between the assessments, however, is that nothing in the EU AI Act specifies that conformity assessments must be carried out by third parties, whereas California’s impact assessments must be conducted by a third party with relevant experience and expertise.
Similarly, both California and the EU impose strict notification obligations that keep employees’ rights front of mind. Under the EU AI Act, for example, people must be notified when they encounter biometric recognition systems or AI applications that claim to be able to read their emotions. Taking a slightly different but aligned approach, California compels employers to notify workers when electronic monitoring or automated systems are used in the workplace, and permits such monitoring only where it is necessary for the job.
AI’s fast-evolving and dynamic nature necessitates a forward-looking approach to anticipate and prepare for an uncertain future. Given AI’s rapid development and expanding applications, AI risks are constantly changing, further complicating risk assessment and mitigation efforts. Left unchecked, biased or inaccurate algorithmic decision-making can perpetuate existing structures of inequality and lead to discrimination, causing severe ethical, legal, and reputational harm.
While California has taken a less prescriptive approach, narrowly focusing its AI regulation within the context of employment law, the EU AI Act is set to lead the way in responsible AI governance with its extra-territorial scope. Holistic AI’s risk management platform can help enterprises catalogue their AI systems, identify risks, and recommend steps to mitigate them. The platform operates across five risk verticals, which cover all of the obligations for high-risk systems under the EU AI Act.
Once the conformity assessment has been completed on the Holistic AI platform, we provide compliance certification. Additionally, our platform can continually monitor a system after deployment and re-issue certification following any significant changes to the system.
To find out more, get in touch with a member of our team at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.