Key Takeaways from the SHRM Event on the Legal and Practical Implications of Using AI in Hiring

Published on Mar 24, 2023

On 23 March 2023, the Society for Human Resource Management (SHRM) held an event sponsored by the Society for Industrial and Organizational Psychology (SIOP) on Exploring the Legal and Practical Implications of Using AI-Based Assessments in Hiring. The panel, moderated by Nancy Tippins, Principal of the Nancy T. Tippins Group, comprised Keith Sonderling, Commissioner of the Equal Employment Opportunity Commission (EEOC); Dr Eric Dunleavy, Vice President of the Employment and Litigation Services Division at DCI Consulting Group; and Dr Seth Zimmer, Assistant Vice President, Organizational Assessment and Development at AT&T. The panel discussed guidelines on how to evaluate and implement AI-based tools for recruitment, as well as the legal and ethical implications of using AI-based assessments in hiring practices. In this article, we summarise some of the key themes that emerged from the event.

Compliance with federal EEO laws

Commissioner Keith Sonderling kicked off the panel with a discussion of different forms of AI-driven assessments, including tools for interviewing, predicting acceptance of job offers, and measuring sentiment. Highlighting SHRM’s finding that 1 in 4 organisations are using AI in their hiring practices, and noting that AI is involved in every stage of the recruitment funnel, Commissioner Sonderling commented on the increasingly widespread use of AI in HR. He noted that AI can potentially help to mitigate unconscious human biases and increase transparency about how employment decisions are made, comparing the human brain to a black box. However, he also emphasised that algorithms are only as good as the data they are trained on: biased training data is likely to result in a biased algorithm. Fortunately, there are a number of ways that bias in models can be identified and mitigated.

When used correctly, AI can increase workforce diversity from the very beginning of the recruitment funnel: AI-driven tools for writing job descriptions can identify patterns of language that are more inclusive and more likely to attract particular demographic groups. However, he also stressed the importance of continuously monitoring AI systems – because they are self-reinforcing, a “set it and forget it” approach cannot be used. After each major update or change to the training data, the outcomes of the model for different groups should be compared to identify and mitigate any instances of bias.
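
To make this concrete, here is a minimal sketch of what such a post-update check could look like: it compares selection rates across demographic groups and flags any group whose rate falls below four-fifths of the highest group’s rate, the rule of thumb the Uniform Guidelines use for adverse impact (discussed later in this article). The data, column names, and threshold handling are illustrative assumptions, not a procedure prescribed by the panel:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate (share of applicants selected) for each demographic group."""
    return df.groupby(group_col)[selected_col].mean()

def flag_adverse_impact(rates: pd.Series, threshold: float = 0.8) -> pd.Series:
    """Impact ratio of each group's selection rate against the highest rate.

    Ratios below `threshold` (the four-fifths rule of thumb) warrant review.
    """
    ratios = rates / rates.max()
    return ratios[ratios < threshold]

# Hypothetical outcome log from the latest version of the model
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0,   1],
})

rates = selection_rates(applicants, "group", "selected")
print(rates)                       # A: 0.75, B: 0.40
print(flag_adverse_impact(rates))  # B: 0.53 -> below 0.8, review for bias
```

Running a check like this after every retraining or scoring change turns the “continuous monitoring” the Commissioner described into a routine, auditable step rather than an ad hoc exercise.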

Indeed, although much of the regulation targeting employment decisions is over half a century old, it still applies to AI-driven employment tools. Ultimately, the liability for compliance with these laws lies with the employer, not the vendor of the tool, so Commissioner Sonderling stressed the importance of employers carrying out their due diligence. Employers should actively seek to understand how the vendor developed the tool, who tests the model and which metrics are used, whether the model is retested when the system is updated, and how these efforts apply to their own employment decisions. With AI-driven employment tools being a key area of focus in the EEOC’s Strategic Enforcement Plan for 2023-2027, it is more important than ever that employers ensure they are compliant with equal opportunity laws when using AI-driven assessment tools.

Practical challenges in using AI-based assessments

Following Commissioner Sonderling’s presentation, Dr Seth Zimmer spoke about the key challenges that practitioners using and developing AI-based assessments may face, grouping them into three main categories: fairness, measurement, and compliance.

He divided fairness into three main concerns: whether test-takers are treated fairly in terms of consistency of test items and scoring algorithms, access to technology, availability of preparation materials, and opportunities to re-take assessments; the use of potentially irrelevant factors, such as social media presence, personal and protected characteristics, and off-the-wall signals like having taken a particular class; and test-takers’ perceptions of the tools. While all three concerns are important, the need to measure job-relevant characteristics is underlined by an investigation into video interview provider Retorio, which found that factors such as wearing glasses or having a bookshelf in the background of a video interview influenced personality scores despite not being job-relevant.

In terms of measurement, Dr Zimmer covered challenges such as the lack of explainability of black-box models, what the model actually predicts and whether it is job-relevant, and what validity evidence exists for the tool. This is closely linked to compliance challenges: to comply with professional guidance and legal requirements, the validity of the tool may need to be supported by relevant evidence. He also spoke about regulations relevant to HR tech and compliance with the Uniform Guidelines on Employee Selection Procedures.
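
As one illustration of how a practitioner might probe a black-box assessment for these measurement problems, the sketch below uses permutation importance to check which inputs actually drive a model’s scores – a feature that carries high importance but is not job-relevant (recall the Retorio bookshelf example above) is a red flag. The model, feature names, and data here are hypothetical, and permutation importance is one technique among many rather than anything the panel mandated:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical assessment data: two job-relevant signals plus one spurious
# input (e.g. a video-background artefact) that should NOT drive scores.
n = 500
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
features = ["skills_test", "structured_interview", "background_artefact"]
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```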

Indeed, HR tech is increasingly being targeted by policymakers, particularly in the US. Several upcoming laws will impose additional obligations on employers using HR tech, and many of these employers are likely to look to their vendors for support with compliance. However, as highlighted by Commissioner Sonderling and Dr Zimmer, it is the employer that is ultimately liable for compliance, meaning that employers procuring and using these tools must do their due diligence and understand the tools’ capabilities, limitations, and development process.

Challenges in complying with the Uniform Guidelines on Employee Selection Procedures

Dr Eric Dunleavy closed the session by providing an overview of the Uniform Guidelines on Employee Selection Procedures, which were jointly developed by the EEOC, the Civil Service Commission, the Department of Labor, and the Department of Justice and published in 1978. These Guidelines were developed as a blueprint for complying with Title VII of the US Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin, and Executive Order 11246, which prohibits federal contractors from discriminating in employment decisions on the basis of race, color, religion, sex, sexual orientation, gender identity, or national origin.

Since the Guidelines were published 45 years ago, Dr Dunleavy highlighted that they have not kept pace with current practice and signposted resources available to practitioners in the field, including SIOP’s Principles for the Validation and Use of Employee Selection Procedures and the Standards for Educational and Psychological Testing, both of which are aligned and reflect current science and best practice.

While outdated, the Uniform Guidelines remain important to those using AI in their hiring practices: employers will have to show that the tool does not result in adverse impact, i.e., differential hiring rates for different subgroups. In the event that a procedure does result in adverse impact, the employer must then justify that the procedure measures a job-related construct. A plaintiff may also be able to point to alternative procedures that would have resulted in less adverse impact and could have been used instead.

These requirements also apply to AI-driven assessments, where the validity of the assessment may need to be demonstrated in terms of criterion-related validity (how well performance on the measure relates to job outcomes), content validity (whether the content of the procedure and the job are aligned), and construct validity (how the procedure relates to other measures of the same construct). Dr Dunleavy therefore stressed the importance of using job analysis to identify the responsibilities and skills the position requires and using that to inform the design and development of AI-driven tools, as well as clearly defining the criterion of interest, carrying out significance testing, and regularly testing for adverse impact, particularly when the assessment or scoring model is updated.
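
As a minimal sketch of two of these checks – criterion-related validity evidence and significance testing for group differences in selection – the snippet below correlates assessment scores with later job-performance ratings and applies Fisher’s exact test to a 2x2 table of selection outcomes. All data and counts are made up for illustration, and real validation studies involve far more than a single correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# --- Criterion-related validity: do assessment scores track job outcomes? ---
n = 200
scores = rng.normal(size=n)                                  # hypothetical assessment scores
performance = 0.4 * scores + rng.normal(scale=1.0, size=n)   # later job-performance ratings

r, p = stats.pearsonr(scores, performance)
print(f"criterion-related validity: r = {r:.2f}, p = {p:.4f}")

# --- Significance testing for adverse impact on a 2x2 selection table ---
#                 selected  not selected
contingency = [[30, 70],    # group A (hypothetical counts)
               [18, 82]]    # group B
_, p_impact = stats.fisher_exact(contingency)
print(f"Fisher's exact test: p = {p_impact:.4f}")  # small p: the gap is unlikely to be chance
```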

Becoming compliant

The use of AI and other automated and algorithmic tools in recruitment will soon be even more strictly regulated than traditional hiring practices, with policymakers across the US and EU introducing legislation that will have important implications for employers around the world using these tools. The best way to prepare for these laws is to act early. As the panellists stressed, it is the employer that is ultimately liable for the use of automated tools in employment decisions, underlining the importance of doing due diligence when using a tool from a vendor. To find out more about how Holistic AI can help you comply with upcoming AI regulation, get in touch at we@holisticai.com.


DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
