
California’s AB 331 Automated Decision Tools Bill: 10 Things You Need to Know

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Jun 26, 2023

California is among the states leading US efforts to regulate AI and make it safer and fairer, having proposed legislation to limit workplace monitoring and modifications to its employment regulations to address the use of automated decision systems. However, its latest initiative takes a broader approach, seeking to regulate tools used to make consequential life decisions.

Indeed, California Assembly Member Rebecca Bauer-Kahan introduced AB-331 in January 2023 to prohibit the use of automated decision tools that result in algorithmic discrimination. This is defined as differential treatment or impact that disfavours people based on their actual or perceived race, colour, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.

Interestingly, before being revised, the Bill prohibited the use of automated decision tools that contributed to discrimination, with the revisions narrowing the activities that are prohibited. The Bill has been held under submission, so it is unlikely to make it out of committee in the coming months, but it would have significant implications for both developers and deployers if passed. In this blog post, we outline 10 things you need to know about AB-331.

1. How does AB 331 define an automated decision tool?

According to AB 331, an automated decision tool (ADT) is a system or service that uses artificial intelligence and has been developed, marketed, or modified to make or influence consequential decisions.

Artificial intelligence, in turn, is defined as a machine-based system that, for a given set of human-defined objectives, makes predictions, recommendations, or decisions that influence a real or virtual environment.

2. What tools are targeted by AB 331?

AB-331 targets tools used to make consequential decisions, meaning decisions that have a legal, material, or other significant effect on an individual’s life in terms of the impact of, access to, or the cost, terms, or availability of key opportunities and services, such as employment, education, housing, essential utilities, healthcare, financial services, the criminal justice system, legal services, and voting.

3. What are the impact assessment requirements of AB 331 for deployers?

A deployer is an entity that uses an automated decision tool to make a consequential decision.

From 1 January 2025, deployers of ADTs must annually perform an impact assessment of tools used to make consequential decisions. This must include:

  • A statement of the purpose of the ADT and its intended benefits, use, and deployment contexts
  • A description of the ADT’s outputs and how they are used to make or influence a consequential decision
  • A summary of the type of data collected and processed by the ADT when it is used in a consequential decision
  • A statement of the extent to which the deployer’s use of the ADT is consistent with the developer’s statement of its intended use
  • An analysis of potential adverse impacts on the basis of sex (including pregnancy, childbirth, and related conditions, gender identity, intersex status, and sexual orientation), race, colour, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information
  • A description of the safeguards implemented by the deployer to address foreseeable risks of discrimination from the use of the tool
  • A description of how the ADT will be used, or monitored when it is used, to make or influence a consequential decision
  • A description of how the ADT has been or will be evaluated for validity or relevance

An additional impact assessment must be carried out in the event of a significant update to the system. This includes a new version, new release, or other update that changes the ADT’s use case, key functionality, or expected outcomes.
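To make these requirements concrete, here is a minimal sketch, in Python, of how a deployer might capture the required contents of an impact assessment as a structured record. The class and field names are our own illustration, not terminology from the Bill.

    # Hypothetical sketch of an AB 331 impact-assessment record; the field
    # names are illustrative, not taken from the Bill itself.
    from dataclasses import dataclass, field

    @dataclass
    class ImpactAssessment:
        purpose: str                   # purpose, intended benefits, use, and deployment contexts
        outputs_description: str       # the ADT's outputs and how they inform a consequential decision
        data_collected: list[str]      # types of data collected and processed by the ADT
        use_consistency: str           # extent to which use matches the developer's stated use
        adverse_impact_analysis: str   # analysis of potential adverse impacts on protected groups
        safeguards: list[str] = field(default_factory=list)  # safeguards against discrimination risks
        usage_and_monitoring: str = "" # how the ADT is used or monitored in decisions
        validity_evaluation: str = ""  # how validity or relevance has been or will be evaluated
        completed_on: str = ""         # ISO date; assessments must be performed annually

A record like this would need to be regenerated annually and again after any significant update to the tool.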

4. What are the impact assessment requirements for developers?

Developers of ADTs are entities that design, code, produce, or modify an ADT for making or influencing consequential decisions, whether for their own use or for use by a third party.

From 1 January 2025, developers of ADTs must annually complete and document an assessment of any ADT they design, code, or produce, which must include all of the following:

  • A statement of the purpose of the ADT and its intended benefits, use, and deployment contexts
  • A description of the ADT’s outputs and how they are used to make or influence a consequential decision
  • A summary of the type of data collected and processed by the ADT when it is used in a consequential decision
  • An analysis of potential adverse impacts on the basis of sex (including pregnancy, childbirth, and related conditions, gender identity, intersex status, and sexual orientation), race, colour, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information
  • A description of the safeguards implemented by the developer to address foreseeable risks of discrimination from the use of the tool
  • A description of how the ADT can be used, or monitored when used, to make or influence a consequential decision

An additional assessment must also be carried out following any significant update to the ADT.

5. What are the notification requirements for deployers?

Before or at the time an ADT is used to make a consequential decision, deployers must notify any natural person affected by the decision that an ADT is being used. This notification should include:

  • A statement of the purpose of the ADT
  • Contact information for the deployer
  • A plain language description of the ADT that includes an outline of any human components and how any automated component is used to inform a consequential decision

If a consequential decision is based solely on the output of the ADT, a deployer shall accommodate a request for an alternative procedure or accommodation, where possible. Deployers may reasonably request, collect, and process information to identify the person and the associated decision; if this information is not provided, the deployer is not obligated to provide an alternative process or accommodation.

6. What information do developers have to provide deployers?

Developers must provide deployers with a statement of the intended use of the ADT and documentation regarding the known limitations of the tool, including the risk of algorithmic discrimination; a description of the type of data used to program or train the ADT; and an explanation of how the ADT was evaluated for validity and explainability before sale or licensing. The disclosure of trade secrets, as defined in Section 3426.1 of the Civil Code, is not required.

7. What are the requirements in relation to governance programs?

A deployer or developer of an ADT must establish, document, implement, and maintain a governance program with reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination from the use of the ADT.

The safeguards must be appropriate to:

  • The use or intended use of the ADT
  • The deployer’s or developer’s role as a deployer or developer
  • The size, complexity, and resources of the deployer or developer
  • The nature, context, and scope of the deployer’s or developer’s activities concerning the ADT
  • The technical feasibility and cost of tools, assessments, and other strategies to map, measure, manage, and govern risks

The governance program must designate at least one employee to be responsible for overseeing and maintaining the program and for legal compliance. This person should have the authority to assert to their employer a good-faith belief that the design, production, or use of the ADT fails to comply with the law, and the employer should conduct a prompt and complete assessment of any compliance issue raised.

The program should also:

  • Identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination
  • If established by a deployer, provide for the performance of impact assessments
  • If established by a developer, provide for compliance with the Bill’s requirements for developers
  • Conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance
  • Maintain the results of an impact assessment for two years after its completion
  • Evaluate and make reasonable adjustments to administrative and technical safeguards following changes in the technology, risks, technical standards, business arrangements, or operations

8. What are the requirements regarding an artificial intelligence policy?

A developer or deployer must make publicly available, in a readily accessible way, a clear artificial intelligence policy that provides a summary of:

  • The types of ADTs currently in use or made available to others by the developer or deployer
  • How the deployer or developer manages the foreseeable risks of algorithmic discrimination that may arise from the use of the ADTs it uses or makes available

9. What are the penalties for non-compliance?

Within 60 days of completing a required impact assessment, a deployer or developer must provide the assessment to the Civil Rights Department. Failure to do so could see them liable for an administrative fine of up to $10,000 per violation. Each day that the impact assessment is not submitted is a distinct violation.
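Because every day of non-submission counts as a separate violation, maximum exposure grows linearly with the delay. A quick illustration in Python (the $10,000-per-violation cap comes from the Bill; the 30-day delay is our own example):

    # Each day an impact assessment goes unsubmitted is a distinct violation,
    # capped at $10,000 each under AB 331.
    MAX_FINE_PER_VIOLATION = 10_000

    def max_exposure(days_late: int) -> int:
        # Upper bound on administrative fines for a late impact assessment
        return days_late * MAX_FINE_PER_VIOLATION

    print(max_exposure(30))  # a 30-day delay could mean fines of up to $300,000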

Additionally, civil action can be brought against a deployer or developer by the public attorneys authorised by the Bill: the Attorney General, a district attorney, a county counsel, a city attorney for the jurisdiction in which the violation occurred, or a city prosecutor (in cities that have a full-time city prosecutor) acting with the district attorney’s consent. A court may award injunctive relief, declaratory relief, and reasonable attorney’s fees and litigation costs.

Forty-five days’ written notice of the alleged violation must be provided before civil action is commenced. If the deployer or developer has cured the noticed violation and provides the plaintiff with an express written statement, made under penalty of perjury, that the violation has been cured and that no further violations will occur, injunctive relief will not be awarded.

10. Are there any exemptions?

Impact assessment and governance program requirements do not apply to deployers with fewer than 25 employees, unless, as of the end of the prior calendar year, the ADT impacted more than 999 people per year.
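As we read it, the exemption turns on two conditions together: headcount and the tool’s reach in the prior calendar year. A minimal sketch of that logic in Python (the function name and framing are ours):

    # Sketch of the small-deployer exemption as we read AB 331: exempt only if
    # the deployer has fewer than 25 employees AND no ADT impacted more than
    # 999 people per year as of the end of the prior calendar year.
    def is_exempt(employee_count: int, people_impacted_last_year: int) -> bool:
        return employee_count < 25 and people_impacted_last_year <= 999

    print(is_exempt(10, 500))    # True: small deployer, limited reach
    print(is_exempt(10, 1500))   # False: impact threshold exceeded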

Get compliant!

HR Tech is increasingly being targeted by AI regulation around the world, particularly in the EU and US. Taking steps early is the best way to ensure you are compliant. Get in touch at we@holisticai.com to find out more about how Holistic AI can help you with AI risk, governance, and compliance.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
