California is among the states leading US efforts to regulate AI and make it safer and fairer, having proposed legislation to limit workplace monitoring and modifications to employment regulations to address the use of automated decision systems. However, its latest initiative takes a broader approach, seeking to regulate tools used to make consequential life decisions.
California Assembly Member Rebecca Bauer-Kahan introduced AB-331 in January 2023 to prohibit the use of automated decision tools that result in algorithmic discrimination. This is defined as differential treatment or impact that disfavours people based on their actual or perceived race, colour, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.
Interestingly, before being revised, the Bill prohibited the use of automated decision tools that contributed to discrimination, with the revisions narrowing the activities that are prohibited. The Bill has been held under submission, so it is unlikely to make it out of committee in the coming months, but it would have significant implications for both developers and deployers if passed. In this blog post, we outline 10 things you need to know about AB-331.
According to AB-331, an automated decision tool (ADT) is a system or service that uses artificial intelligence and has been developed, marketed, or modified to make or influence consequential decisions.
Artificial intelligence is defined as a machine-based system that, for a given set of human-defined objectives, makes predictions, recommendations, or decisions that influence a real or virtual environment.
AB-331 targets tools used to make consequential decisions, which have a legal, material, or other significant effect on an individual’s life in terms of the impact of, access to, or the cost, terms, or availability of the following:
A deployer is an entity that uses an automated decision tool to make a consequential decision.
From 1 January 2025, deployers of ADTs must annually perform an impact assessment of tools used to make consequential decisions. This must include:
An additional impact assessment must be carried out in the event of a significant update to the system. This includes a new version, new release, or other update that changes its use case, key functionality, or expected outcomes.
Developers of ADTs are entities that design, code, produce, or modify an ADT for making or influencing consequential decisions, whether for their own use or for use by a third party.
From 1 January 2025, developers of ADTs must annually complete and document an assessment of any ADT they design, code, or produce that includes all of the following:
An additional assessment must also be carried out following any significant update to the ADT.
At or before the time an ADT is used to make a consequential decision, deployers must notify any natural person affected by the decision that an ADT is being used. This should include:
If a consequential decision is based solely on the output of the ADT, a deployer shall accommodate a request for an alternative procedure or accommodation, where possible. Deployers may reasonably request, collect, and process information for the purpose of identifying the person and the associated decision. If this information is not provided, the deployer is not obligated to provide an alternative procedure or accommodation.
Developers must provide deployers with a statement of the intended use of the ADT and documentation regarding the known limitations of the tool, including the risk of algorithmic discrimination; a description of the type of data used to program or train the ADT; and an explanation of how the ADT was evaluated for validity and explainability before sale or licensing. The disclosure of trade secrets, as defined in Section 3426.1 of the Civil Code, is not required.
A deployer or developer of an ADT must establish, document, implement, and maintain a governance program with reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination from the use of the ADT.
The safeguards must be appropriate to the use or intended use of the ADT; the deployer’s or developer’s role as a deployer or developer; the size, complexity, and resources of the deployer or developer; the nature, context, and scope of the activities of the deployer or developer concerning the ADT; and the technical feasibility and cost of tools, assessments, and other strategies to map, measure, manage, and govern risks.
In general, the governance program must designate at least one employee to be responsible for overseeing and maintaining the governance program and legal compliance. This person should have the authority to assert to their employer a good faith belief that the design, production, or use of the ADT fails to comply with the Bill’s requirements, and the employer should conduct a prompt and complete assessment of any compliance issues raised.
The program should also:
A developer or deployer must make publicly available, in a readily accessible way, a clear artificial intelligence policy that provides a summary of:
Within 60 days of completing a required impact assessment, a deployer or developer must provide the assessment to the Civil Rights Department. Failure to do so could make them liable for an administrative fine of up to $10,000 per violation, with each day that the impact assessment is not submitted counting as a distinct violation.
Additionally, civil action can be brought against a deployer or developer by the following public attorneys authorised by the Bill: the Attorney General, a district attorney, county counsel, a city attorney for the district in which the violation occurred, or a city prosecutor (in cities that have a full-time city prosecutor) with the district attorney’s consent. A court may award injunctive relief, declaratory relief, reasonable attorney’s fees, and litigation costs.
Forty-five days’ written notice of the alleged violation must be provided before commencing civil action. If the deployer or developer has mitigated the noticed violation and provides the plaintiff with an express written statement, made under penalty of perjury, that the violation has been cured and no further violations will occur, a claim for injunctive relief will not be awarded.
Impact assessment and governance program requirements do not apply to developers with fewer than 25 employees, unless the ADT impacted more than 999 people per year as of the end of the prior calendar year.
HR Tech is increasingly being targeted by AI regulation around the world, particularly in the EU and US. Taking steps early is the best way to ensure you are compliant. Get in touch at we@holisticai.com to find out more about how Holistic AI can help you with AI risk, governance, and compliance.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.