June 6th, 2022
California Automated Systems Regulation Primer
Purpose of the legislation
- The proposed amendment to the California Fair Employment and Housing Act is intended to prohibit unfair discrimination in employment recruitment processes arising from the harmful use of AI.
- It establishes regulations prohibiting discrimination related to specific characteristics of individuals such as: national origin and ancestry; sex; marital status; religious creed; disability; and age.
Duties on companies
- Prohibition of unfair discrimination against individuals, on the basis of a protected characteristic, in recruitment processes carried out by AI systems. Such practices are declared unlawful and therefore give rise to liability for the employer.
- Keep records of information affecting employees, including all data obtained through machine-learning processes, for a period of four years.
- Companies and agents utilizing AI in recruitment processes must maintain records of the assessment criteria used by the AI system for each employer or covered entity to whom the automated decision system is provided.
Who’s in scope?
- Obligations fall on both the employer and other agents involved in the process (e.g. recruitment agencies).
- The legislation applies to employers, defined as any person in any business or enterprise who regularly employs five or more individuals.
- The legislation also applies to “agents”, meaning any person acting on behalf of an employer, including but not limited to recruitment agencies, that provides AI-based services for an employer’s evaluation or decision making regarding recruitment, hiring, performance evaluation, or other employment-related assessments.
- For record-keeping purposes, the legislation also applies to any person who engages in the advertisement, sale, provision, or use of a selection tool, including automated-decision systems.
What’s the risk?
- Companies can be investigated by the California Department of Fair Employment and Housing, which could eventually initiate civil action in court.
- Complaints regarding unlawful practices can be made by an individual, but can also develop into class actions where there is evidence that a group or class of persons has been affected.
Strategic evaluation and opportunities
- Covered entities will have to engage in complex assessments to ensure that their use of AI does not constitute an unlawful practice.
- Compliance programs based on algorithm accountability processes are well positioned to ground affirmative defenses in the event of complaints.
- Compliance also offers opportunities to refine algorithms so that companies do not lose talent to unconscious bias.
CERRA compliance with HolisticAI
- Although there is no obligation to conduct an algorithmic impact assessment, covered entities are strongly advised to perform such assessments in order to mitigate the risk of liability.
- HolisticAI has completed more audits than any competitor and is a pioneer in the field of AI auditing.
- HolisticAI has published research on, and has advised companies working with, AI use in recruitment processes.
- HolisticAI’s audits assess bias, efficacy, robustness, explainability, and privacy (the key risk vectors listed by the AAA) to provide granular analyses of AI risk.
- HolisticAI specializes in adaptive strategies that mitigate risk and optimize companies’ AI capabilities.
- Our ongoing monitoring and support ensure that companies can remain confident in their AI deployments.
- HolisticAI’s specialized law and governance team is at the forefront of AI regulatory debates, publishing widely on US, UK, and EU AI legislation and preparing clients well in advance of regulation.