The adoption of AI and automation is becoming ubiquitous across sectors, but particularly in the HR sector, where AI-driven and automated tools are increasingly being used in talent management across the employee lifecycle. From evaluating candidates to performance tracking to onboarding, HR professionals are capitalising on the benefits of these tools to increase efficiency, improve candidate experience, and save money. However, the application of these tools comes with risks, such as the perpetuation and amplification of existing biases, the introduction of novel sources of bias, and a lack of transparency about a system’s capabilities and limitations.
Due to increasing concerns about the risks of using automated tools to make employment decisions, the New York City Council took decisive action, passing a landmark piece of legislation in 2021 known as the Bias Audit Law. This law – Local Law 144 – is set to be enforced from 5 July 2023, after first being pushed back to 15 April 2023 from its initial enforcement date of 1 January 2023.
This delay was due to rules proposed by New York City’s Department of Consumer and Worker Protection (DCWP) to clarify the requirements of the law and support its implementation and enforcement. The first set of proposed rules was published in September 2022, and a public hearing on them was held in November 2022. Due to numerous concerns raised about the effectiveness of these rules, the DCWP postponed the enforcement date to 15 April 2023 in December 2022, before publishing a second version of the proposed rules shortly after.
Another public hearing was then held on the second version of the rules in January 2023, and the DCWP issued an update in February 2023 stating that it was still working through the large volume of comments. Finally, on 6 April 2023, the DCWP adopted the final version of the rules and announced a final enforcement date of 5 July 2023. In this blog post, we give an overview of the key elements of the rules that have evolved throughout this rulemaking process.
Automated employment decision tools are defined by the legislation as a computational process, derived from machine learning, statistical modelling, data analytics, or artificial intelligence that produces a simplified output (a score, classification, or recommendation) used to aid or automate decision-making for employment decisions (screening for promotion or employment).
The first version of the proposed rules provided some clarity on what machine learning, statistical modelling, data analytics, or artificial intelligence means: a group of mathematical, computer-based techniques that generate a prediction of a candidate’s fit or likelihood of success, or a classification based on skills or aptitude. The inputs, predictor importance, and parameters of the model are identified by a computer to improve model accuracy or performance, and are refined through cross-validation or by using a train/test split.
This remained consistent throughout the iterations of the rules, except for the point about cross-validation, which was removed from the adopted rules.
Although not defined in the initial text of the law, all three versions of the rules use the same definition of a simplified output: a prediction or classification, as specified in the definition of machine learning, statistical modelling, data analytics, or artificial intelligence. It can take the form of a score, tag or categorization, recommendation, or ranking. It does not include the output of analytical tools that translate or transcribe existing text, e.g., tools that convert a resume from a PDF or transcribe a video or audio interview.
Again, not specified in the initial text of the law, the meaning of a system substantially assisting or replacing discretionary decision-making has sparked debate among stakeholders for being too narrow, but has remained relatively constant throughout the iterations of the rules.
It is defined by the first version of the rules as: relying solely on a simplified output; weighting the simplified output more than any other criterion; or using a simplified output to overrule or modify conclusions derived from other factors, including human decision-making. In the second and adopted versions of the rules, however, the phrase “or modify conclusions” has been removed.
While the text of the law requires that audits be independent and impartial, who qualifies as an independent auditor was not clarified until the second version of the proposed rules. Here, an independent auditor is defined as an entity that has not been involved in using, developing, or distributing the AEDT, and that has neither an employment relationship with the employer seeking to use the AEDT nor a financial interest in the AEDT. This definition was not changed in the adopted rules.
The first version of the proposed rules specifies that bias should be determined using impact ratios based on the subgroup selection rate (the percentage of individuals in the subgroup who are selected, e.g., hired or promoted), the subgroup average score, or both. Ratios are calculated by dividing the subgroup average score or selection rate by the average score or selection rate of the group with the highest score or rate:
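A reconstruction of this formula in our own notation (the rules do not prescribe this exact presentation):

\[
\text{Impact Ratio} = \frac{\text{selection rate (or average score) of a subgroup}}{\text{selection rate (or average score) of the group with the highest rate or score}}
\]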
The second version of the rules provides a revised method for calculating impact ratios for AEDTs that produce a continuous score. Here, scores are first binarized into pass/fail depending on whether they fall above or below the median score of the sample, and the rate at which individuals in a group score above the median is termed the scoring rate:
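A plausible formalisation of this revised calculation, again in our own notation, for a group \(g\):

\[
\text{scoring rate}_g = \frac{\#\{\text{individuals in } g \text{ scoring above the sample median}\}}{\#\{\text{individuals in } g\}},
\qquad
\text{Impact Ratio}_g = \frac{\text{scoring rate}_g}{\max_{g'}\ \text{scoring rate}_{g'}}
\]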
This metric was unchanged in the final text.
The first version of the rules provided an example of the impact ratio results that should be included in the Summary of Results. However, the table included only intersectional results, even though the body of the text did not specify that the analysis should be conducted for intersectional groups. The table also included results for groups with very small sample sizes, representing less than 1.5% of the data, and gave no indication of whether any data was missing.
The second version of the rules added a second table showing the standalone analysis, and the body of the text clarified that the analysis must be carried out for both standalone and intersectional groups. However, both examples showed the number of applicants in each group only for impact ratios calculated on categorical data.
The adopted rules clarify that the number of applicants in each group must also be included for regression systems. Further, an auditor is now permitted to exclude groups with a small sample size, representing less than 2% of the data, from the impact ratio calculations, provided they include the number of applicants in that category and its scoring rate or selection rate in the results. Additionally, the Summary of Results should include information about the number of data points excluded from the analysis due to missing information. A sketch of how these calculations fit together is given below.
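As an illustration, the following minimal Python sketch shows how an auditor might compute selection-rate impact ratios with this small-group exclusion. The column names, the sample data, and the `impact_ratios` helper are our own illustrative choices, not anything prescribed by the rules:

```python
import pandas as pd

def impact_ratios(df, group_col="race_sex", selected_col="selected", min_share=0.02):
    """Selection rates and impact ratios per group, with the adopted rules'
    small-group exclusion: groups under `min_share` of the data are left out
    of the ratio calculation but still reported with their counts and rates."""
    stats = (
        df.groupby(group_col)[selected_col]
          .agg(n="count", selection_rate="mean")  # count and % selected per group
          .reset_index()
    )
    stats["share"] = stats["n"] / len(df)
    stats["excluded"] = stats["share"] < min_share

    # Denominator: the highest selection rate among the non-excluded groups.
    top_rate = stats.loc[~stats["excluded"], "selection_rate"].max()
    stats["impact_ratio"] = stats["selection_rate"].where(~stats["excluded"]) / top_rate
    return stats

# Hypothetical applicant data with an intersectional group label and a binary
# selection outcome (1 = selected, 0 = not selected).
data = pd.DataFrame({
    "race_sex": ["White M"] * 50 + ["Black F"] * 40 + ["Asian M"] * 10,
    "selected": [1] * 30 + [0] * 20 + [1] * 16 + [0] * 24 + [1] * 3 + [0] * 7,
})
print(impact_ratios(data))
```

On this hypothetical data, the selection rates are 0.6, 0.4, and 0.3, giving impact ratios of 1.0, 0.67, and 0.5 respectively; no group falls below the 2% threshold, so none is excluded.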
The concept of using historical versus test data to conduct the audit was not addressed until the second version of the proposed rules. Here, historical data refers to data collected during the use of the AEDT, and test data is any data other than this. The second version of the rules states that a bias audit must use the historical data of the AEDT; however, if insufficient historical data is available to conduct a statistically significant bias audit, test data may be used instead.
If a bias audit uses test data, the Summary of Results must explain why historical data was not used and describe how the test data was generated and obtained.
Further, a bias audit of an AEDT used by multiple employers or employment agencies may use the historical data of any employers or employment agencies that use the AEDT. However, an employer or employment agency may rely on a bias audit of an AEDT that uses the historical data of other employers or employment agencies only if it provided historical data from its use of the AEDT to the independent auditor for the bias audit or if it has never used the AEDT.
These guidelines are consistent in the adopted rules, although examples of the use of historical and test data have been added.
The 5 July 2023 enforcement date is fast approaching. After this date, employers and employment agencies using AEDTs to evaluate candidates for employment or employees for promotion within New York City must have procured a bias audit and have procedures established for the required notifications. To find out more about Holistic AI’s approach to bias audits, schedule a demo.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.