Following an announcement that the enforcement date for NYC Local Law 144 (the NYC Bias Audit Law) had been postponed from 1 January 2023 to 5 July 2023, New York City's Department of Consumer and Worker Protection (DCWP) held a second public hearing on 23 January 2023 to give the public an opportunity to comment on its updated proposed rules. Although the DCWP did not comment on the testimonies shared during the hearing (download Holistic AI's submission here), attendees raised some crucial questions about the scope of the legislation. In this article, we summarise the key takeaways from the hearing.
The first version of the proposed rules seemingly created a loophole: in allowing audits to be conducted internally, it specified only that auditors should not be involved in using or developing the tool, meaning that internal teams not involved in these activities could perform the audit. However, amid concerns raised about the robustness of internal audits during the first hearing, this provision was amended in a revised version of the rules to specify that the auditor cannot be someone who has an employment relationship with, or a direct or indirect financial interest in, the employer or employment agency. In short, auditors must be independent third parties. During the hearing, there was overwhelming support for this update, with speakers highlighting the need for impartiality when conducting audits. However, one attendee did claim that internal parties could support auditing efforts, a view not shared by other speakers.
While the law is very much targeted at the bias risk vertical, the requirements to give candidates or employees notice about the use of the tool and to make a summary of results publicly available also increase transparency, something attendees strongly supported. However, there was also an emphasis that bias and transparency are minimum requirements and that audits could go beyond these risk verticals to examine efficacy, for example. Indeed, since automated employment decision tools are used to make high-stakes decisions with significant implications for both employers/employment agencies and candidates/employees, audits could also examine how well a tool measures the construct it claims to measure or predicts job performance. This would ensure that tools are effective and increase confidence in their use.
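For concreteness, the bias calculations at the core of the proposed rules are selection rates and impact ratios computed for each demographic category. The sketch below is a minimal illustration of that calculation; the applicant data and category labels are hypothetical, invented purely for the example.

```python
# Illustrative only: a minimal sketch of the selection-rate and impact-ratio
# calculations described in the proposed rules, on invented applicant data.

from collections import Counter

# Hypothetical (category, selected) records for a screening tool
applicants = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", True),
]

totals = Counter(cat for cat, _ in applicants)
selected = Counter(cat for cat, sel in applicants if sel)

# Selection rate per category: number selected / total applicants in category
rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's selection rate divided by the highest rate
best = max(rates.values())
impact_ratios = {cat: rate / best for cat, rate in rates.items()}

print(rates)          # {'male': 0.75, 'female': 0.5}
print(impact_ratios)  # {'male': 1.0, 'female': 0.666...}
```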
In addition to calling for the scope of audits to be widened, a concern shared across almost all of the testimonies was that the definition of AEDTs in the updated rules is too narrow. Under the updated rules, an AEDT is a tool that substantially assists or replaces human decision-making, where a simplified output is relied on solely, is weighted more than any other output, or is used to overrule human decision-making.
Given that these systems are often used as part of a more extensive selection or promotion process and are rarely the only factor considered when making decisions, these narrow specifications create loopholes and enable bad-faith actors to argue that their tools fall outside the scope of the legislation, meaning that the impact of the law could be unnecessarily limited. It is also important to note that human decision-making can itself be biased, so systems with human reviewers may be no less biased and will, therefore, still require an audit.
Moreover, the law specifies that AEDTs are computational processes derived from machine learning, data analytics, or artificial intelligence, and the updated rules add that these technologies are used to generate predictions or classifications: a computer identifies the inputs, feature importance, and parameters, and the inputs are refined by cross-validation or by using training and testing data.
However, attendees argued that this also limits the scope of the legislation: while cross-validation is a widely used way to refine models and ensure that they generalise beyond the training data, not all machine learning, data analytics, or artificial intelligence systems use it. This could therefore create another loophole, whereby bad-faith actors could argue that their system does not fall within the scope of the legislation because it does not use cross-validation. Given that the spirit of the legislation is to regulate all automated employment decision systems, regardless of the sophistication of the technology and models they use, these definitions make the set of systems governed by the law much narrower than intended.
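For readers less familiar with the terminology, the sketch below shows what these two refinement methods look like in practice: k-fold cross-validation alongside a simple train/test split. It is purely illustrative; the dataset, model, and scikit-learn usage are our own assumptions for the example, not anything prescribed by the rules.

```python
# Illustrative only: the two model-refinement methods named in the updated
# rules, shown on a synthetic dataset. Simpler tools (e.g. fixed rule-based
# scoring) may use neither, which is the gap attendees pointed to.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# Refinement via 5-fold cross-validation: the data are split into five folds,
# and each fold takes a turn as the held-out evaluation set.
cv_scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV accuracy:", round(cv_scores.mean(), 3))

# Refinement via a single train/test split: fit on 80%, evaluate on 20%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model.fit(X_train, y_train)
print("Hold-out accuracy:", round(model.score(X_test, y_test), 3))
```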
Local Law 144 is a landmark piece of legislation that is paving the way for more effective and safer AEDTs, and its mission is widely supported. However, given the concerns raised during the hearing and the points of contention we raised in our submission, there is potential for a third version of the rules to be released before the law goes into effect, to ensure that it can be practically enforced and that there are no loopholes that allow actors to avoid compliance.
To find out how Holistic AI can help you comply with Local Law 144, get in touch at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any particular situation or employer.