
Bias Audit Laws in the US: The State of Play for Automated Employment Decision Tools

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Jan 15, 2024

The past decade has seen significant advancements in how candidates are evaluated for employment, with several innovative methods being developed that take advantage of technologies such as artificial intelligence and machine learning. While these tools can transform candidate experiences and make them more engaging and immersive, they can also pose novel risks and exacerbate bias and discrimination if the appropriate safeguards and mitigations are not in place.

In response, lawmakers, particularly in the US, are increasingly proposing laws that seek to impose requirements on the use of so-called automated employment decision tools (AEDTs). The first law of this type was New York City Local Law 144, which imposed bias audit requirements for AEDTs. Other states have since followed New York City’s lead.

In this blog post, we give an overview of how New York City set the precedent for AEDT bias audits and examine which other US states are developing their own bias audit laws.

Key takeaways:

  • New York City Local Law 144 has been in force since July 5, 2023, and set the precedent for bias audits of AEDTs, along with rules for notifying the public about these checks and sharing a summary of the findings
  • Pennsylvania has proposed a law similar to New York City Local Law 144, HB1729, although the mechanisms for consent differ
  • New Jersey’s S1588 is reminiscent of Local Law 144 but targets vendors instead of employers or employment agencies and takes a third approach to consent
  • New York State has introduced two laws, AB567 and S7623, requiring bias audits of automated employment decision tools, although their approaches vary
  • New York has also introduced two more laws with requirements for AEDTs, but these do not require bias audits

How did New York City set the precedent for bias audits?

Bias audits of automated employment decision tools have been required in New York City under Local Law 144 since July 5, 2023, when enforcement by the Department of Consumer and Worker Protection (DCWP) began. Under the law, employers and employment agencies using automated employment decision tools to evaluate candidates for employment or employees for promotion are required to obtain annual independent, impartial evaluations of their tools.

The law defines automated employment decision tools as:

“any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons”

Terms such as ‘machine learning’, ‘simplified output’, and ‘substantially assist or replace discretionary decision making’ have been defined by the DCWP in its final rules for enforcement.

What are the rules for conducting NYC bias audits?

Bias audits must be performed in accordance with the metrics set out by the DCWP. The metric of most importance is the impact ratio, which may be determined in multiple ways depending on the type of system being tested.

For classification systems - or those that result in a binary outcome - the selection rate for each group must be calculated based on how many individuals in that category are designated to the positive condition. This rate must then be divided by the selection rate of the category with the highest rate:


Impact ratio = Selection rate for a category / Selection rate of the most selected category

For regression systems, or those that result in a continuous outcome, the scoring rate must be calculated by binarizing the data based on whether individuals score above or below the median score for the dataset. Like the metric for classification tools, the scoring rate for each category must then be divided by the rate of the group with the greatest rate:


Impact ratio = Scoring rate for a category / Scoring rate of the highest scoring category

Impact ratios must be calculated for both standalone and intersectional groups; that is, for sex/gender and race/ethnicity groups alone, and for the intersections between sex/gender and race/ethnicity groups (a worked sketch follows the list of categories below).

Specifically, the race/ethnicity groups that must be examined are:

  • Hispanic or Latino – Individuals of Cuban, Mexican, Puerto Rican, South or Central American, or other Spanish culture or origin, regardless of race.
  • White (Not Hispanic or Latino) – People with origins in Europe, the Middle East, or North Africa.
  • Black or African American (Not Hispanic or Latino) – Individuals who have origins in any of the Black racial groups of Africa.
  • Native Hawaiian or Other Pacific Islander (Not Hispanic or Latino) – Anyone with origins in Hawaii, Guam, Samoa, or other Pacific Islands.
  • Asian (Not Hispanic or Latino) – Individuals with origins in the Far East, Southeast Asia, or the Indian subcontinent, including Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam.
  • American Indian or Alaska Native – People with origins in any of the original peoples of North and South America (including Central America) who maintain tribal affiliation or community attachment.
  • Two or more races – Anyone who identifies with more than one race. This does not include those identifying as Hispanic or Latino.
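To make the impact ratio calculations concrete, here is a minimal sketch in Python using pandas. The DataFrame, its column names (`sex`, `race_ethnicity`, `selected`), and the sample values are illustrative assumptions for this sketch, not anything prescribed by Local Law 144 or the DCWP rules.

```python
import pandas as pd

# Illustrative audit data; Local Law 144 does not prescribe a data schema.
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "race_ethnicity": ["White", "Black", "White", "Black",
                       "Asian", "White", "Black", "Asian"],
    "selected": [1, 0, 1, 1, 0, 1, 1, 0],  # 1 = positive outcome
})

def impact_ratios(data: pd.DataFrame, group_cols: list[str]) -> pd.Series:
    """Selection rate per group divided by the highest group's selection rate."""
    selection_rates = data.groupby(group_cols)["selected"].mean()
    return selection_rates / selection_rates.max()

# Standalone groups
print(impact_ratios(df, ["sex"]))
print(impact_ratios(df, ["race_ethnicity"]))

# Intersectional groups (sex/gender x race/ethnicity)
print(impact_ratios(df, ["sex", "race_ethnicity"]))

# For a regression system, binarize scores at the sample median first;
# the same function then yields scoring-rate impact ratios:
# df["selected"] = (df["score"] > df["score"].median()).astype(int)
```

For instance, if one group has a selection rate of 0.50 and the most selected group a rate of 0.75, the first group’s impact ratio is 0.50 / 0.75 ≈ 0.67.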

What are the transparency requirements under New York City Local Law 144?

As well as commissioning an annual independent bias audit, employers and employment agencies must publicly publish a summary of the results of the bias audit on their website before using the tool.

This public summary must include:

  • The source and explanation of the data used to conduct the bias audit
  • The number of applicants in each category
  • The number of individuals assessed by the AEDT that were not included in the calculations due to missing demographic data
  • The distribution date of the tool and date of the audit
  • Whether any categories were excluded from the analysis due to small sample size
  • The impact ratios for standalone and intersectional groups

It’s worth noting that auditors can exclude groups with a small sample size (less than 2% of the audit data) from the impact ratio analysis but must still calculate the scoring rate or selection rate of the group.
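As a rough sketch of how an auditor might apply that exclusion rule, the function below reuses the illustrative data and assumptions from the sketch above; the 2% threshold comes from the DCWP rules, while everything else is hypothetical.

```python
import pandas as pd

def summarize_groups(data: pd.DataFrame, group_cols: list[str],
                     threshold: float = 0.02) -> pd.DataFrame:
    """Selection rate and sample share per group, flagging groups under the
    2% share that may be excluded from the impact ratio analysis.
    Their rates are still calculated, as the DCWP rules require."""
    summary = data.groupby(group_cols)["selected"].agg(
        selection_rate="mean", n="count")
    summary["share_of_sample"] = summary["n"] / len(data)
    summary["may_be_excluded"] = summary["share_of_sample"] < threshold
    return summary

# Example: summarize_groups(df, ["sex", "race_ethnicity"])
```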

In addition to the publication of the summary of results, employers and employment agencies using AEDTs must also inform candidates and employees of the use of an AEDT at least ten business days before it is used to evaluate them.

This notice should include information on how the AEDT will be used, the job qualifications and characteristics it will consider, the type and source of the data collected about them, how to request accommodations or an alternative selection procedure, and the AEDT data retention policy.

Pennsylvania’s Bias Audit Law

Seemingly inspired by New York City’s law, Pennsylvania has proposed its own AEDT bias audit law. First proposed in September 2023, HB1729 defines an automated employment decision tool as:

 “any system the function of which is governed by statistical theory, or systems the parameters of which are defined by systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests and other learning algorithms, which automatically filter individuals or prospective individuals for employment or for any term, condition or privilege of employment in a way that establishes a preferred individual or individuals”

In contrast to Local Law 144, HB1729 covers not only employment and promotion decisions but also any other decisions made about compensation, terms, conditions, or privileges of employment, therefore not limiting the scope solely to hiring and promotion tools.

Another difference between the NYC and Pennsylvania laws is that the Pennsylvania law requires employers or employment agencies to obtain explicit consent for the use of an AEDT before use. This contrasts with the NYC law, which seemingly assumes consent but requires employers to tell applicants how to request an alternative selection procedure.

Similarly to Local Law 144, the Pennsylvania bias audit law requires annual, independent bias audits of AEDTs, the publication of a summary of the audit results, and notification of candidates.

New Jersey’s Bias Audit Law

A third slightly different take on bias audit requirements has surfaced in New Jersey.

While the NYC and Pennsylvania laws target employers and employment agencies, the New Jersey law targets the vendors that provide AEDTs.

First introduced in December 2022, NJ A4909 (Senate equivalent S1926) died in committee at the start of the 2024 session but was carried over by S1588, which was introduced on January 9, 2024. The bill defines an AEDT, similarly to Pennsylvania, as:

“any system the function of which is governed by statistical theory, or systems the parameters of which are defined by systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”

The bill seeks to make it unlawful to use an automated employment decision tool unless it has been subject to an impartial bias audit within the past year. Additionally, the annual bias audit provided by the vendor cannot be passed on as an additional cost to the end users.

Those using an AEDT to screen a candidate for employment must notify candidates of the use of the tool and the fact that the tool was used to assess job qualifications or characteristics within 30 days. This is yet another approach to consent, where notification may be given after the tool is used instead of in advance.

Another difference between the New Jersey law and other bias audit laws is that it does not require the publication of a summary of audit results.

Like the Pennsylvania and New York City laws, the New Jersey law does not give much information on the form a bias audit should take, beyond stating that it should be used to examine predicted compliance with the state’s law against discrimination.

New York bias audit laws

Finally, the New York City law has seemingly inspired similar legislation for the state as a whole. New York State presently has multiple proposed laws that require bias audits of automated employment decision tools.

New York AB567

AB567 was first introduced on January 9, 2023, seeking to introduce requirements for AEDTs, which it defines as:

“any system used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers. Automated employment decision tools shall include personality tests, cognitive ability tests, resume scoring systems and any system whose function is governed by statistical theory, or whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests and other artificial intelligence or machine learning algorithms”

AB567 uses different terminology from the other bias audit laws, requiring annual impartial disparate impact analyses of AEDTs. However, the analyses have similar requirements: the results should examine adverse impact against any group based on sex, race, ethnicity, or other protected class.

Additionally, the law requires that a summary of the results of the analysis should be made publicly available on the website of the employer or employment agency using the tool, although there are no notification requirements.

The law also requires that employers or employment agencies using AEDTs provide the department with a summary of the most recent disparate impact analysis and permits the attorney general and commissioner to carry out investigations of suspicions of violations.

New York S7623

More recently in August 2023, New York introduced S7623 to restrict the use of electronic monitoring and automated employment decision tools by employers and employment agencies, where an AEDT is defined similarly to Local Law 144 as:

“any computational process, automated system, or algorithm utilizing machine learning, statistical modeling, data analytics, artificial intelligence, or similar methods that issues a simplified output, including a score, classification, ranking, or recommendation, that is used to assist or replace decision making for employment decisions that impact natural persons”

Here, employment decisions cover wages, benefits, other compensation, hours, work schedule, performance evaluation, hiring, selecting for recruitment, discipline, promotion, termination, job content, assignment of work, access to work opportunities, productivity requirements, workplace health and safety, and other terms or conditions of employment.

As well as setting out requirements reminiscent of the now-dead California Workplace Technology Accountability Act, S7623 makes it illegal to use an AEDT unless it has been subject to an independent, impartial bias audit within the past year. However, the bias audits required by the New York law go beyond previously proposed requirements: the audit must identify and describe the attributes and modeling techniques that the tool uses to produce outputs, and note whether these are scientifically valid ways of evaluating performance or the ability to perform essential job functions.

Bias audits must also examine whether these attributes could function as a proxy for protected characteristics, as well as evaluate the training data for any disparities and how they might result in disparate impact. Moreover, the bias audits should recommend any necessary mitigations and evaluate how the tool may impact accessibility for those with disabilities. Employers and employment agencies must support these efforts by retaining pertinent documentation.

As well as a comprehensive audit, S7623 requires notifications like those required by Local Law 144: notifications must be given to employees and candidates at least ten business days before the use of the tool. The bill also requires meaningful human oversight of the use of AEDTs.

Other New York AEDT Laws

New York has also proposed additional laws seeking to regulate the use of AEDTs used in the state that are unrelated to bias audits:

  • AB7859 imposes notification requirements for the use of AEDTs in New York, so it could be coupled with AB567 to form a two-part equivalent of New York City Local Law 144
  • S5641 establishes criteria for the use of automated employment decision tools to limit bias and discrimination against any group based on sex, race, ethnicity, or other protected class, requiring impact assessments that evaluate the reasonably foreseeable risk of unlawful discrimination resulting from the use of an AEDT. This law has some parallels with California’s AB331.

Bias audits for automated employment decision tools are increasingly common

Requirements for bias audits of automated employment decision tools are likely to increase across the US within the next couple of years.

While these laws all seek to reduce the potential for bias and discrimination resulting from AEDTs, they do so in different ways. Organizations staying ahead of regulatory risk are building a deep understanding of the differing laws and striving towards more robust AI governance as a whole.

By coupling regulatory news monitoring with automated inventorying, bias assessments, and anti-bias development toolkits, teams not only stay safer but also drive more meaningful AI tool adoption through trust.

Speak to an analyst to explore what you can gain from the world’s first 360-degree AI governance, regulatory, and compliance platform today.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
