The past decade has seen significant advancements in how candidates are evaluated for employment, with several innovative methods being developed that take advantage of technologies such as artificial intelligence and machine learning. While these tools can transform candidate experiences and make them more engaging and immersive, they can also pose novel risks and exacerbate bias and discrimination if the appropriate safeguards and mitigations are not in place.
In response, lawmakers, particularly in the US, are increasingly proposing laws that impose requirements on the use of so-called automated employment decision tools (AEDTs). The first law of this type was New York City Local Law 144, which imposed bias audit requirements for AEDTs. Other states have since followed New York City's lead.
In this blog post, we give an overview of how New York City set the precedent for AEDT bias audits and examine how other U.S. states are developing their own bias audit laws.
Key takeaways:

- New York City's Local Law 144, the first law of its kind, has required annual, independent bias audits of automated employment decision tools since enforcement began on July 5, 2023.
- Pennsylvania (HB1729), New Jersey (S1588), and New York State (AB567 and S7623) have since proposed bias audit laws of their own.
- While all of these laws seek to reduce bias and discrimination resulting from AEDTs, they differ in scope, in consent and notification requirements, and in who must commission the audit.
Bias audits of automated employment decision tools have been required in New York City under Local Law 144 since July 5, 2023, when enforcement by the Department of Consumer and Worker Protection (DCWP) began. Under the law, employers or employment agencies using automated employment decision tools to evaluate candidates for employment or employees for promotion must obtain annual, independent, impartial evaluations of their tools.
The law defines automated employment decision tools as:
“any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons”
Terms such as ‘machine learning’, ‘simplified output’, and ‘substantially assist or replace discretionary decision making’ have been defined by the DCWP in its final rules for enforcement.
Bias audits must be performed in accordance with the metrics set out by the DCWP. The most important metric is the impact ratio, which is calculated differently depending on the type of system being tested.
For classification systems, or those that produce a binary outcome, the selection rate for each category must be calculated based on how many individuals in that category are designated to the positive condition. This rate must then be divided by the selection rate of the category with the highest rate:

Selection rate = number of individuals in the category selected / total number of individuals in the category

Impact ratio = selection rate for a category / selection rate of the most selected category
For regression systems, or those that produce a continuous output, a scoring rate must be calculated instead by binarizing the data according to whether each individual scores above the sample's median score. As with classification tools, each category's scoring rate must then be divided by the rate of the category with the highest rate:

Scoring rate = number of individuals in the category scoring above the median / total number of individuals in the category

Impact ratio = scoring rate for a category / scoring rate of the highest scoring category
Impact ratios must be calculated both for standalone categories and for intersectional categories; that is, for sex/gender and race/ethnicity categories separately, and for each intersection of sex/gender and race/ethnicity categories. A worked sketch follows the list below.
Specifically, the race/ethnicity categories that must be examined, following the categories used for EEO-1 reporting, are:

- Hispanic or Latino
- White
- Black or African American
- Native Hawaiian or Other Pacific Islander
- Asian
- Native American or Alaska Native
- Two or More Races
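To make these metrics concrete, here is a minimal sketch of the calculations in Python. It assumes a hypothetical pandas DataFrame of audit data with columns sex, race_ethnicity, and either a binary selected column (classification tools) or a continuous score column (regression tools); the column names and data are illustrative, not prescribed by the law or the DCWP rules.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """Selection rates and impact ratios for a classification AEDT."""
    out = df.groupby(group_cols)["selected"].agg(total="count", positive="sum")
    out["selection_rate"] = out["positive"] / out["total"]
    # Divide each category's rate by the rate of the most selected category
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    return out

def scoring_impact_ratios(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """Scoring rates and impact ratios for a regression AEDT: an individual
    'scores' if their output is above the sample's median score."""
    binarized = df.assign(selected=(df["score"] > df["score"].median()).astype(int))
    return impact_ratios(binarized, group_cols)

# Synthetic audit data, for illustration only
df = pd.DataFrame({
    "sex": ["Male", "Male", "Female", "Female", "Male", "Female"],
    "race_ethnicity": ["White", "Asian", "White", "Asian", "Asian", "White"],
    "selected": [1, 0, 1, 1, 1, 0],
})

print(impact_ratios(df, ["sex"]))                    # standalone sex/gender
print(impact_ratios(df, ["race_ethnicity"]))         # standalone race/ethnicity
print(impact_ratios(df, ["sex", "race_ethnicity"]))  # intersectional categories
```

An impact ratio of 1.0 means a category is selected at the same rate as the most selected category. Note that Local Law 144 itself does not set a pass/fail threshold for impact ratios.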
As well as commissioning an annual independent bias audit, employers and employment agencies must publish a summary of the results of the bias audit on their website before using the tool.
This public summary must include:

- the date of the most recent bias audit of the tool
- a summary of the results, including the source and explanation of the data used, the number of individuals assessed, and the selection or scoring rates and impact ratios for all categories
- the distribution date of the tool
It's worth noting that auditors can exclude categories with a small sample size (representing less than 2% of the audit data) from the impact ratio calculation, but must still report the selection rate or scoring rate of the category.
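Continuing the hypothetical example above, this carve-out might look like the following in practice (the 2% threshold comes from the DCWP rules; everything else is illustrative):

```python
# Categories below 2% of the audit data may be excluded from the impact
# ratio comparison, but their selection or scoring rate must still be reported.
results = impact_ratios(df, ["race_ethnicity"])
share = results["total"] / results["total"].sum()
eligible = share >= 0.02

# Recompute impact ratios over the eligible categories only; excluded
# categories keep their selection rate but receive no impact ratio.
results["impact_ratio"] = (
    results["selection_rate"] / results.loc[eligible, "selection_rate"].max()
)
results.loc[~eligible, "impact_ratio"] = float("nan")
print(results)
```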
In addition to publishing the summary of results, employers and employment agencies using AEDTs must also inform candidates and employees of the use of an AEDT at least ten business days before it is used to evaluate them.
This notice must include information on how the AEDT will be used, the job qualifications and characteristics it will consider, the type and source of data collected, how to request an accommodation or alternative selection procedure, and the AEDT data retention policy.
Seemingly inspired by New York City’s law, Pennsylvania has proposed its own AEDT bias audit law. First proposed in September 2023, HB1729 defines an automated employment decision tool as:
“any system the function of which is governed by statistical theory, or systems the parameters of which are defined by systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests and other learning algorithms, which automatically filter individuals or prospective individuals for employment or for any term, condition or privilege of employment in a way that establishes a preferred individual or individuals”
In contrast to Local Law 144, which is limited to tools used in employment or promotion decisions, HB1729 also covers any other decisions made about compensation, terms, conditions, or privileges of employment.
Another difference between the NYC and Pennsylvania laws is that HB1729 requires employers or employment agencies to obtain explicit consent before using an AEDT. This contrasts with the NYC law, which effectively assumes consent but requires that applicants be told how to request an alternative selection procedure.
Similarly to Local Law 144, the Pennsylvania bias audit law requires annual, independent bias audits of AEDTs, the publication of a summary of the results of the bias audit, and notification of candidates.
A third, slightly different take on bias audit requirements has surfaced in New Jersey.
While the NYC and Pennsylvania laws target employers and employment agencies, the New Jersey law targets the vendors that provide AEDTs.
First introduced in December 2022, NJ A4909 (senate equivalent S1926) died in committee at the start of the 2024 session but was carried over by S1588, which was introduced on January 9, 2024. The bill defines an AEDT, similarly to Pennsylvania, as:
“any system the function of which is governed by statistical theory, or systems the parameters of which are defined by systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”
The bill seeks to make it unlawful to use an automated employment decision tool unless it has been subject to an impartial bias audit within the past year. Additionally, the annual bias audit provided by the vendor cannot be passed on to end users as an additional cost.
Those using an AEDT to screen a candidate for employment must notify the candidate, within 30 days, that the tool was used and that it assessed their job qualifications or characteristics. This is yet another approach to consent: notification may be given after the tool is used rather than in advance.
Another difference between the New Jersey law and the other bias audit laws is that it does not require the publication of a summary of audit results.
Like the Pennsylvania and New York City laws, the New Jersey law gives little detail on the form a bias audit should take, beyond stating that it should examine predicted compliance with New Jersey's Law Against Discrimination.
Finally, the New York City law has seemingly inspired a push for similar legislation at the state level: New York State currently has multiple proposed laws that would require bias audits of automated employment decision tools.
AB567 was first introduced on January 9, 2023, seeking to introduce requirements for AEDTs, which it defines as:
“any system used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers. Automated employment decision tools shall include personality tests, cognitive ability tests, resume scoring systems and any system whose function is governed by statistical theory, or whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests and other artificial intelligence or machine learning algorithms”
AB567 uses different terminology from the other bias audit laws, requiring annual impartial disparate impact analyses of AEDTs. However, the analyses have similar requirements: they must examine adverse impact against any group based on sex, race, ethnicity, or other protected classes.
Additionally, the law requires that a summary of the results of the analysis be made publicly available on the website of the employer or employment agency using the tool, although there are no notification requirements.
The law also requires employers or employment agencies using AEDTs to provide the department with a summary of the most recent disparate impact analysis, and permits the attorney general and the commissioner to investigate suspected violations.
More recently, in August 2023, New York introduced S7623 to restrict the use of electronic monitoring and automated employment decision tools by employers and employment agencies, where an AEDT is defined similarly to Local Law 144 as:
“any computational process, automated system, or algorithm utilizing machine learning, statistical modeling, data analytics, artificial intelligence, or similar methods that issues a simplified output, including a score, classification, ranking, or recommendation, that is used to assist or replace decision making for employment decisions that impact natural persons”
Here, employment decisions cover wages, benefits, other compensation, hours, work schedule, performance evaluation, hiring, selecting for recruitment, discipline, promotion, termination, job content, assignment of work, access to work opportunities, productivity requirements, workplace health and safety, and other terms or conditions of employment.
As well as setting out requirements reminiscent of the now-dead California Workplace Technology Accountability Act, S7623 makes it illegal to use an AEDT unless it has been subject to an independent, impartial bias audit within the past year. However, the bias audits required by S7623 go beyond previously proposed requirements: the audit must identify and describe the attributes and modeling techniques the tool uses to produce outputs, and assess whether these are scientifically valid ways of evaluating performance or the ability to perform essential job functions.
Bias audits must also examine whether these attributes could function as a proxy for protected characteristics, as well as evaluate the training data for any disparities and how they might result in disparate impact. Moreover, the bias audits should recommend any necessary mitigations and evaluate how the tool may impact accessibility for those with disabilities. Employers and employment agencies must support these efforts by retaining pertinent documentation.
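S7623 does not prescribe how auditors should test for proxies. As one hedged illustration, an auditor might screen each input attribute for statistical association with a protected characteristic; the sketch below uses Cramér's V on the hypothetical audit DataFrame from the earlier examples, with made-up attribute names (zip_code, college, employment_gap) and an arbitrary flagging threshold, none of which come from the bill.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical variables, from 0 to 1."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    return (chi2 / (n * (min(table.shape) - 1))) ** 0.5

# Hypothetical input attributes; flag any that are strongly associated
# with a protected characteristic as potential proxies.
for col in ["zip_code", "college", "employment_gap"]:
    v = cramers_v(df[col], df["race_ethnicity"])
    if v > 0.5:  # illustrative threshold, not taken from the bill
        print(f"{col} may be acting as a proxy (Cramér's V = {v:.2f})")
```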
As well as a comprehensive audit, S7623 requires notifications like those required by Local Law 144: they must be given to employees and candidates at least ten business days before the tool is used. The bill also requires meaningful human oversight of the use of AEDTs.
New York has also proposed additional laws, unrelated to bias audits, that seek to regulate the use of AEDTs in the state.
Bias audit requirements for automated employment decision tools are likely to become more common across the US within the next couple of years.
While these laws all seek to reduce the potential for bias and discrimination resulting from AEDTs, they do so in different ways. Organizations staying ahead of regulatory risk are building a deep understanding of the differing laws and working towards more robust AI governance as a whole.
By coupling regulatory news monitoring, automated inventorying and bias assessments, and anti-bias development toolkits, teams not only stay safer but drive more meaningful AI tool adoption through trust.
Speak to an analyst to explore what you can gain from the world’s first 360-degree AI governance, regulatory, and compliance platform today.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts