With almost 25% of organisations using artificial intelligence (AI) in their recruitment process, rising to 42% among extra-large businesses, concerns are growing around the risks that this technology can pose. Fuelled by high-profile cases of harm resulting from the use of AI in talent management and acquisition (e.g., Amazon’s withdrawn resume screening tool), policymakers around the world are starting to target automated recruitment tools, imposing stringent obligations on their use.
Most of these efforts are centred in the US: California has proposed amendments to its employment regulations regarding automated decision systems, Illinois has introduced the Artificial Intelligence Video Interview Act to require employers to inform candidates about the use of AI-analysed video interviews, and New York City and New Jersey have both centred their efforts around bias audit requirements.
In this blog post, we compare New York City Local Law 144, which will require independent bias audits of automated employment decision tools from 5 July 2023, and New Jersey Assembly Bill 4909, which was recently introduced and will impose similar requirements if passed.
Under NYC LL144, an automated employment decision tool (AEDT) is defined as:
“any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons”
The text then clarifies that this does not include tools that do not automate, support, substantially assist, or replace discretionary decision-making processes, or that do not impact natural persons, such as junk email filters and firewalls.
Rules recently published by the NYC Department of Consumer and Worker Protection (DCWP) clarify some of the key terms used in this definition, establishing that to substantially assist or replace discretionary decision making means:
“(i) to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered; (ii) to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or (iii) to use a simplified output to overrule conclusions derived from other factors including human decision making.”
Additionally, machine learning, statistical modelling, data analytics, or artificial intelligence are defined as:
“a group of mathematical, computer based techniques: i. that generate a prediction, meaning an expected outcome for an observation, such as an assessment of a candidate’s fit or likelihood of success, or that generate a classification, meaning an assignment of an observation to a group, such as categorizations based on skill sets or aptitude; and ii. for which a computer at least in part identifies the inputs, the relative importance placed on those inputs, and other parameters for the models in order to improve the accuracy of the prediction or classification; and iii. for which the inputs and parameters are refined through cross-validation or by using training and testing data.”
And a simplified output is:
“a prediction or classification as specified in the definition for “machine learning, statistical modelling, data analytics, or artificial intelligence.” A simplified output may take the form of a score (e.g., rating a candidate’s estimated technical skills), tag or categorization (e.g., categorizing a candidate’s resume based on key words, assigning a skill or trait to a candidate), recommendation (e.g., whether a candidate should be given an interview), or ranking (e.g., arranging a list of candidates based on how well their cover letters match the job description).”
On the other hand, AB 4909 takes a less prescriptive approach, defining an automated employment decision tool as:
"any system the function of which is governed by statistical theory, or systems the parameters of which are defined by systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”
A key distinction between these definitions is that the New Jersey proposal enumerates specific model types to be regulated, with a focus on machine learning techniques such as neural networks and random forests, while the NYC law takes a broader approach, regulating a range of systems without singling out particular algorithms.
A bias audit is a key requirement of both laws. The New York City legislation defines it as:
“an impartial evaluation by an independent auditor. Such bias audit shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any component 1 category required to be reported by employers pursuant to subsection (c) of section 2000e-8 of title 42 of the United States code as specified in part 1602.7 of title 29 of the code of federal regulations.”
Here, component 1 categories refer to the protected characteristics that businesses must report to the Equal Employment Opportunity Commission (EEOC) each year. Specifically, these are sex/gender (male, female, and optionally other) and race/ethnicity (Hispanic or Latino, White, Black or African American, Native Hawaiian or Other Pacific Islander, Asian, American Indian or Alaska Native, and two or more races).
A key note here is that bias audits must be impartial and conducted by an independent entity. The DCWP’s rules specify that an independent auditor is someone who can exercise impartial judgement and who: i) was not involved in using, developing, or designing the AEDT; ii) did not, at any point during the audit, have an employment relationship with the employer or employment agency that seeks to use or continue to use the AEDT, or with a vendor that developed or distributes the AEDT; and iii) has no direct or material indirect financial interest in the employer or employment agency that seeks to use or continue to use the AEDT, or in a vendor that developed or distributed the AEDT.
On the other hand, the New Jersey proposal defines a bias audit as:
“an impartial evaluation, including but not limited to testing, of an automated employment decision tool to assess its predicted compliance with the provisions of the “Law Against Discrimination,” P.L. 1945, c. 169 (C. 10:5-1 et seq.), and any other applicable law relating to discrimination in employment.”
The New Jersey Law Against Discrimination prohibits discrimination based on characteristics including race, creed, colour, national origin, age, sex, and gender.
Therefore, while both pieces of legislation require impartial audits, the NYC law goes one step further by requiring that these audits be conducted by an independent entity. However, the set of characteristics protected under the NYC law is much narrower.
Both pieces of legislation target employment decisions. According to NYC LL144, this means to:
“screen candidates for employment or employees for promotion within the city.”
According to AB 4909, an employment decision means to:
“screen candidates for employment or otherwise to help to decide compensation or any other terms, conditions or privileges of employment in the State.”
Since both at least imply that the use of AEDTs in hiring and promotion decisions is targeted for bias audits, the two definitions are mostly aligned. However, the NJ bill is arguably more comprehensive in that it also covers compensation and other terms, conditions, and privileges of employment.
Under Local Law 144, it will be unlawful from 5 July 2023 for employers or employment agencies to use an AEDT to screen candidates for employment or employees for promotion unless the tool has been audited for bias no more than one year before its use and a summary of the results of the most recent bias audit has been made publicly available on the website of the employer or employment agency before the tool is used.
The rules proposed by the DCWP clarify how auditors must audit AEDTs for bias: for classification systems, selection rates (the proportion of applicants hired or selected to move forward in the hiring process) are compared across subgroups, while for regression systems, scoring rates (the proportion of applicants in each group scoring above the sample median) are compared.
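Expressed as formulas, the comparison is made through impact ratios. The following is a reconstruction based on the calculations described in the DCWP’s rules:

```latex
% Impact ratio for classification (selection) systems:
\mathrm{Impact\;Ratio} = \frac{\text{selection rate for a category}}{\text{selection rate of the most selected category}}

% Impact ratio for regression (scoring) systems:
\mathrm{Impact\;Ratio} = \frac{\text{scoring rate for a category}}{\text{scoring rate of the highest scoring category}}
```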
These impact ratios should be calculated for both standalone groups (e.g., male, female, Black, White, Asian) and intersectional groups (e.g., Black females, Black males, White females, White males). The calculation of these metrics should be based on historical or real-life data where possible, but test data can be used if the required data is not available. With test data, demographic information is collected from a sample of individuals who are then evaluated by the AEDT. If test data is used, the summary of results should explain why historical data could not be used and how the test data was collected.
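To make the arithmetic concrete, the sketch below computes impact ratios for standalone and intersectional groups using the pandas library. The data, column names, and function are hypothetical illustrations, not part of the law or the DCWP’s rules, and this is not a substitute for a compliant audit:

```python
import pandas as pd

# Hypothetical applicant data; column names are illustrative assumptions.
# "selected" is 1 if the AEDT moved the applicant forward, 0 otherwise.
applicants = pd.DataFrame({
    "sex": ["Male", "Male", "Female", "Female", "Female", "Male"],
    "race": ["White", "Black", "White", "Black", "White", "Black"],
    "selected": [1, 0, 1, 1, 0, 1],
})

def impact_ratios(df: pd.DataFrame, group_cols: list[str]) -> pd.Series:
    """Selection rate per group, divided by the rate of the most selected group."""
    rates = df.groupby(group_cols)["selected"].mean()
    return rates / rates.max()

# Standalone categories (e.g., sex alone).
print(impact_ratios(applicants, ["sex"]))

# Intersectional categories (e.g., sex crossed with race/ethnicity).
print(impact_ratios(applicants, ["sex", "race"]))
```

For regression systems, the same pattern applies with the scoring rate, that is, the share of each group scoring above the sample median, in place of the selection rate.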
Impact ratios must be included in the summary of results, a requirement that can be fulfilled by providing a hyperlink to a webpage containing the required impact ratios, provided the link is presented clearly and conspicuously. The distribution date of the tool must also be specified in the summary, along with the date of the most recent bias audit. When the AEDT is retired, the summary of results must remain on the employer or employment agency’s website for at least six months.
In contrast to these specific practices outlined by the NYC law, the New Jersey proposal does not specify any metrics that should be used to calculate bias. Instead, the Bill simply proposes to make it unlawful to use or sell an AEDT unless it has been audited within the past year, and requires that the sale of AEDTs include an annual bias audit service at no extra cost. It also does not specify whether a summary of the results of the audit must be published for the tool to be used legally.
In addition to the bias audit requirements, Local Law 144 also requires employers or employment agencies to notify candidates, at least 10 business days before the use of the tool, that an AEDT will be used to evaluate their application and of the qualifications and characteristics it will use to make judgements, and to allow them to request an accommodation or alternative selection process. If information about the type of data collected, the source of the data, and the data retention policy is not available on the website, candidates can make a written request for this information. Employers and employment agencies must comply with such requests within 30 days of receipt, except where doing so would violate local, state, or federal law or interfere with a law enforcement investigation.
The DCWP’s proposed rules also clarify the ways in which this notification can be given, for example, through a notice on the employment section of the employer’s or employment agency’s website, in the job posting itself, or via mail or e-mail.
AB 4909 has less stringent notification requirements, requiring only that candidates are notified about the use of the tool and the qualifications or characteristics it considers when making decisions. This information must be provided within 30 days of the use of the tool, suggesting that it could be given either before or after the tool is used.
Under Local Law 144, civil penalties for non-compliance start at $500 for the first violation and each violation occurring on the same day, rising to up to $1,500 for subsequent violations. Failure to provide notice and failure to commission a bias audit are separate violations, meaning dual penalties can be incurred by employers that fail to comply with both requirements.
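As an illustration of how these penalties stack, an employer that both used an unaudited AEDT and failed to provide notice on the same day could incur two first violations ($500 + $500 = $1,000), with each subsequent violation of either requirement then costing up to $1,500.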
Similarly, violations of AB 4909 can see employers face penalties of up to $500 for the first violation and each additional violation on the same day, and up to $1,500 for subsequent violations. As with Local Law 144, failure to provide notice and failure to obtain a bias audit are separate violations.
Local Law 144 was initially due to go into effect on 1 January 2023, but its enforcement date was postponed to 15 April 2023 and then again to 5 July 2023 following the adoption of the final rules. In contrast, the New Jersey law will take effect immediately upon enactment.
New York City Local Law 144 and New Jersey Assembly Bill 4909 both address bias in hiring practices by requiring bias audits; however, they each take a different approach. Local Law 144 focuses on the testing of automated employment decision tools (AEDTs) to assess their disparate impact on protected classes, while the New Jersey Assembly Bill prohibits the sale of these tools unless certain conditions are met.
For example, Local Law 144 requires an impartial evaluation of the AEDT by an independent auditor, while the New Jersey law requires the sale of the tool to include an annual bias audit service at no extra cost, and for the tool to be sold or offered for sale with a notice stating that it is subject to the provisions of the bill.
In addition, Local Law 144 focuses on isolated decisions that result from the use of AEDTs, whereas New Jersey’s Assembly Bill is broader in scope, seeking to address bias throughout the hiring process by mandating that software used for hiring purposes is subject to bias audits and provides information about how its algorithms work and the data used to develop them.
Whilst Local Law 144 does not go quite as far, it does require employers to provide information to employees and applicants through explicit transparency requirements.
The two laws share a focus on ensuring fairness in the hiring process and protecting employees from potential discrimination. Both require employers to conduct regular audits of their automated systems to identify any biases and to make corrections or adjustments to those systems to prevent discriminatory practices. However, there are some key differences, including their scope, the type of software that must be audited, and their metrics for calculating bias.
We are likely to see a ‘California Effect’, whereby authorities across the US follow the lead of jurisdictions with stricter regulations, typically starting with California. It would not be unprecedented for another jurisdiction such as NYC to lead the way: in 2021, Colorado’s Equal Pay for Equal Work Act went into effect, making it the first pay transparency law in the United States. This set a precedent, and such laws are now in effect in 17 states across the country, with California, Washington, and Rhode Island enacting their own laws as recently as 1 January 2023.
As such, it is becoming clear that the bias audit approach to HR tech regulation signalled by Local Law 144 is likely to be proposed throughout the US, with New Jersey Assembly Bill 4909 a first example of wider adoption. However, currently proposed state-level AI-specific regulations are trending toward mandating broader audits of workplace technology, with the California Workplace Technology Accountability Act and the Massachusetts Act the leading examples.
With this regulation picking up pace, businesses will soon have to comply with a range of legal requirements. To avoid liability and ensure that algorithms are used safely, businesses will need to take steps early to manage the risks of their AI and procure any necessary interventions, such as bias audits. To find out more about how Holistic AI can help you with this, get in touch at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.