The Evolution of NYC Local Law 144: An Overview of the Key Changes

May 19, 2023
Authored by
Lindsay Levine
Head of Customer Success at Holistic AI
Airlie Hilliard
Senior Researcher at Holistic AI

The adoption of AI and automation is becoming ubiquitous across sectors, but particularly in the HR sector, where AI-driven and automated tools are increasingly being used in talent management across the employee lifecycle. From evaluating candidates to performance tracking to onboarding, HR professionals are capitalising on the benefits of these tools to increase efficiency, improve candidate experience, and save money. However, with the applications of these tools come risks, such as existing biases being perpetuated and amplified, novel sources of bias, and a lack of transparency about the system’s capabilities and limitations.

What is Local Law 144?

Due to increasing concerns about the risks of using automated tools to make employment decisions, the New York City Council took decisive action, passing a landmark piece of legislation in 2021 known as the Bias Audit Law. This law – Local Law 144 – is set to be enforced from 5 July 2023, after first being pushed back to 15 April 2023 from its initial enforcement date of 1 January 2023.

This delay was due to rules proposed by New York City’s Department of Consumer and Worker Protection (DCWP) to clarify the requirements of the law and support its implementation and enforcement. The first set of proposed rules was published in September 2022, and a public hearing on them was held in November 2022. Due to numerous concerns raised about the effectiveness of these rules, the DCWP postponed the enforcement date to 15 April 2023 in December 2022 before publishing a second version of the proposed rules shortly after.

Another public hearing was held on the second version of the rules in January 2023, and the DCWP issued an update in February 2023 stating that it was still working through the large volume of comments. Finally, on 6 April 2023, the DCWP adopted its final version of the rules and announced a final enforcement date of 5 July 2023. In this blog post, we give an overview of the key elements of the rules that have evolved throughout this rulemaking process.

Element of rules: Definition of machine learning, statistical modelling, data analytics, or artificial intelligence
Key change: Clarified in the adopted rules as computer-based techniques that generate a prediction, for which a computer at least in part identifies the inputs, the relative importance placed on those inputs, and other parameters to improve the accuracy of the prediction or classification. Previous versions of the rules also specified that cross-validation must be used to refine parameters and inputs.

Element of rules: Definition of a simplified output
Key change: Consistently defined throughout the rules as a prediction or classification that can take the form of a score, tag or categorisation, recommendation, or ranking. It does not refer to the output of analytical tools that translate or transcribe existing text.

Element of rules: Definition of substantially assist decision-making
Key change: Defined as relying solely on a simplified output, weighting the simplified output more than any other criterion, or using a simplified output to overrule conclusions from other factors, including human decision-making. In the first version of the rules, this also applied to systems used to modify conclusions.

Element of rules: Qualification as an independent auditor
Key change: Not clarified until the second version of the proposed rules, where an independent auditor is defined as an entity that has not been involved in using, developing, or distributing the automated employment decision tool (AEDT) and does not have an employment relationship with the employer seeking to use the AEDT or a financial interest in the AEDT.

Element of rules: Calculating impact ratios
Key change: The metric for classification systems has been consistent throughout the iterations of the rules: the selection rate (the proportion receiving the positive classification) of each subgroup is divided by the rate of the subgroup with the highest rate. For regression systems, the first rules put forward a metric based on the average score of subgroups, but this was changed in the second rules: scores must first be binarized based on whether they fall above the sample median to calculate the scoring rate, and the scoring rate for each subgroup is then divided by that of the group with the highest rate.

Element of rules: Summary of results
Key change: The first rules provided an example of intersectional analysis, but the body of the text did not make this requirement explicit. The second version of the rules explicitly called for intersectional and standalone analyses to be included in the summary. The adopted rules also clarified the exclusion of groups with small sample sizes and the treatment of missing data.

Element of rules: Historical and test data
Key change: The concept of using test data when real-life, historical data is not available was introduced in the second version of the rules. Test data is permitted when historical data is insufficient, provided an explanation of the alternative data used is given. The adopted rules provide examples of the use of historical and test data.

Download a more comprehensive overview of the changes here

Definition of machine learning, statistical modelling, data analytics, or artificial intelligence

Automated employment decision tools are defined by the legislation as a computational process, derived from machine learning, statistical modelling, data analytics, or artificial intelligence that produces a simplified output (a score, classification, or recommendation) used to aid or automate decision-making for employment decisions (screening for promotion or employment).

The first version of the proposed rules provided some clarity, defining machine learning, statistical modelling, data analytics, or artificial intelligence as a group of mathematical, computer-based techniques that generate a prediction of a candidate’s fit or likelihood of success, or a classification based on skills or aptitude. The inputs, predictor importance, and parameters of the model are identified by a computer to improve model accuracy or performance and are refined through cross-validation or by using a train/test split.

This remained consistent throughout the iterations of the rules, except for the point about cross-validation, which was removed from the adopted rules.

Definition of a simplified output

Although a simplified output is not defined in the initial text of the law, all three versions of the rules use the same definition. A simplified output is a prediction or classification, as specified in the definition of machine learning, statistical modelling, data analytics, or artificial intelligence. It can take the form of a score, tag or categorisation, recommendation, or ranking. It does not include the output of analytical tools that translate or transcribe existing text, e.g., tools that convert a resume from a PDF or transcribe a video or audio interview.

Definition of substantially assist or replace discretionary decision-making

Again, this term is not specified in the initial text of the law. The meaning of a system substantially assisting or replacing discretionary decision-making has sparked debate among stakeholders for being too narrow, but it has remained relatively constant throughout the iterations of the rules.

It is defined by the first rules as relying solely on a simplified output, having the simplified output weighted more than any other criterion, or using a simplified output to overrule or modify conclusions from other factors including human decision-making. In the second and adopted version of the rules, however, the phrase “or modify conclusions” has been removed.

Qualification as an independent auditor

While the text of the law requires that audits be independent and impartial, who qualifies as an independent auditor was not clarified until the second version of the proposed rules. There, an independent auditor is defined as an entity that has not been involved in using, developing, or distributing the AEDT and does not have an employment relationship with the employer seeking to use the AEDT or a financial interest in the AEDT. This definition was not changed in the adopted rules.

Calculating impact ratios

The first version of the proposed rules specifies that bias should be determined using impact ratios based on subgroup selection rate (% of individuals in the subgroup that are hired), subgroup average score, or both. Ratios are calculated by dividing the subgroup average score/selection rate by the average score/selection rate of the group with the highest score/rate:


Impact ratio = Selection rate for a category / Selection rate of the most selected category

or

Impact ratio = Average score of individuals in a category / Average score of individuals in the highest scoring category
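
To make the classification metric concrete, the following is a minimal sketch in Python using pandas. The column names and data are hypothetical and purely illustrative; they are not prescribed by the rules.

```python
# Minimal sketch: selection-rate impact ratios for a classification AEDT.
# Assumes applicant-level records with a protected category and a binary
# selection outcome (1 = positive classification). Hypothetical data.
import pandas as pd

data = pd.DataFrame({
    "sex": ["male", "male", "male", "male", "female", "female", "female"],
    "selected": [1, 1, 1, 0, 1, 0, 0],
})

# Selection rate: the proportion of each category receiving the positive classification
selection_rates = data.groupby("sex")["selected"].mean()

# Impact ratio: each category's selection rate divided by the highest selection rate
impact_ratios = selection_rates / selection_rates.max()
print(impact_ratios)  # male: 1.0, female: ~0.44 in this toy data
```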

The second version of the rules provides a revised method for calculating impact ratios for AEDTs that produce a continuous score. Scores are first binarized using pass/fail criteria depending on whether they fall above or below the median score of the sample, and the proportion of a category scoring above the median is termed the scoring rate:


Impact ratio = Scoring rate for a category / Scoring rate for the highest scoring category

This metric was unchanged in the final text.
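
As a companion to the classification sketch above, this hypothetical Python snippet shows the scoring-rate calculation for a continuous-score AEDT; the column names and data are again illustrative assumptions, not part of the rules.

```python
# Minimal sketch: scoring-rate impact ratios for a regression AEDT.
# Continuous scores are binarized at the sample-wide median, and the
# resulting rates are compared like selection rates. Hypothetical data.
import pandas as pd

data = pd.DataFrame({
    "race_ethnicity": ["white", "white", "black", "black", "hispanic", "hispanic"],
    "score": [82.0, 75.0, 68.0, 71.0, 90.0, 64.0],
})

# Binarize: a candidate "passes" if their score is above the sample median
median_score = data["score"].median()
data["above_median"] = (data["score"] > median_score).astype(int)

# Scoring rate: the proportion of each category scoring above the median
scoring_rates = data.groupby("race_ethnicity")["above_median"].mean()

# Impact ratio: each category's scoring rate divided by the highest scoring rate
impact_ratios = scoring_rates / scoring_rates.max()
print(impact_ratios)
```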

Summary of Results requirements

The first version of the rules provided an example of the impact ratio results that should be included in the Summary of Results. However, the table included only intersectional results, even though the body of the text did not specify that the analysis should be conducted for intersectional groups. The table also included results for groups with very small sample sizes, representing less than 1.5% of the data, and gave no indication of whether any data was missing.

The second version of the rules added a second table showing standalone analysis, and the body of the text clarified that the analysis must be carried out for both standalone and intersectional groups. However, both of these examples showed the number of applicants in each group only for impact ratios calculated on categorical data.

The adopted rules clarify that the number of applicants in each group must also be included for regression systems. Further, an auditor is now permitted to exclude groups with a small sample size, representing less than 2% of the data, from the impact ratio calculations, provided they include the number of applicants in that category and the scoring rate or selection rate of that category in the results. Additionally, the Summary of Results should include information about the number of data points excluded from the analysis due to missing information.
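
The small-group rule lends itself to a short illustration. The sketch below, again with hypothetical column names and data, flags intersectional groups below the 2% threshold, computes impact ratios over the remaining groups, and still reports the size and selection rate of the excluded groups, as the adopted rules require.

```python
# Minimal sketch: adopted-rules handling of small intersectional groups.
# Groups under 2% of applicants may be excluded from the impact ratio
# calculation, but their size and selection rate must still be reported.
# Hypothetical data.
import pandas as pd

data = pd.DataFrame({
    "sex": ["male"] * 60 + ["female"] * 40,
    "race_ethnicity": ["white"] * 30 + ["black"] * 30
                      + ["white"] * 20 + ["black"] * 19 + ["asian"],
    "selected": [1, 0] * 50,
})

# Intersectional group sizes and selection rates
summary = data.groupby(["sex", "race_ethnicity"]).agg(
    n=("selected", "size"),
    selection_rate=("selected", "mean"),
)

# Flag groups representing less than 2% of all applicants
summary["excluded"] = summary["n"] / summary["n"].sum() < 0.02

# Impact ratios are computed only over the retained groups; excluded groups
# keep their n and selection_rate but receive no ratio (NaN)
included = summary.loc[~summary["excluded"], "selection_rate"]
summary["impact_ratio"] = included / included.max()
print(summary)
```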

Using historical and test data

The concept of using historical versus test data to conduct the audit was not addressed until the second version of the proposed rules. Here, historical data refers to data collected during the use of the AEDT, and test data is any data other than this. The second version of the rules states that a bias audit must use the historical data of the AEDT, although if insufficient historical data is available to conduct a statistically significant bias audit, test data may be used instead.

If a bias audit uses test data, the summary of results of the bias audit must explain why historical data was not used and describe how the test data used was generated and obtained.

Further, a bias audit of an AEDT used by multiple employers or employment agencies may use the historical data of any employers or employment agencies that use the AEDT. However, an employer or employment agency may rely on a bias audit of an AEDT that uses the historical data of other employers or employment agencies only if it provided historical data from its use of the AEDT to the independent auditor for the bias audit or if it has never used the AEDT.

These guidelines are consistent in the adopted rules, although examples of the use of historical and test data have been added.

Compliance with Holistic AI

The 5 July 2023 enforcement date is fast approaching. After this date, employers and employment agencies using AEDTs to evaluate candidates for employment or employees for promotion within New York City must have procured a bias audit and have procedures established for the required notifications. To find out more about Holistic AI’s approach to bias audits, schedule a demo.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
