Why does HR Tech Need to be Regulated?

August 1, 2023
Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Around the world, employment decisions are regulated by non-discrimination laws – the UK has the Equality Act 2010; the US has Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act; and the European Union has Article 21 of the Charter of Fundamental Rights of the European Union.

While these laws provide a broad framework for non-discrimination in hiring, there has been a recent push to specifically regulate the use of HR tech for employment decisions. Take, for example, the legislative flurry in the US, encompassing New York City Local Law 144, the Illinois Artificial Intelligence Video Interview Act, and New Jersey AB4909. Broader AI laws, such as the EU AI Act, are also targeting HR tech and seeking to impose stringent requirements on its use.

But if employment decisions are already bound by non-discrimination laws regardless of whether they use HR tech, why is HR tech now being specifically targeted? In this blog post, we explore some of the reasons why additional legal requirements are needed for algorithmic tools:

  • Algorithms trained on biased data can perpetuate bias and have an even greater impact than human prejudices.
  • Algorithmic assessment tools can be more complicated to validate, potentially making it more difficult to justify the tool’s use.
  • Algorithms can reduce the explainability of hiring decisions, so disclosure to applicants on the use of automated tools and how they make decisions may be necessary.

Algorithmic models trained on biased data can perpetuate biases

Algorithmic assessments, unsurprisingly, are scored by algorithms. These systems require input data, known as predictors, and a target variable to predict. In the case of a video interview, the predictors could be any number of factors – for example, what applicants say, the length and frequency of their pauses, and their tone and pitch. Using this data, the algorithm then issues a prediction, such as an interview rating. The target variable can come from any number of sources, including human judgements – for instance, the algorithm might be trained to predict the interview performance ratings given by human hiring managers.
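To make this concrete, here is a minimal sketch of such a supervised scoring model – every feature, rating, and data point below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical predictors extracted from video interviews:
# e.g. pause duration, speech rate, and pitch variance for 500 applicants.
X = rng.normal(size=(500, 3))

# Target variable: interview ratings (1-5) previously given by human hiring managers.
y = rng.uniform(1, 5, size=500)

# Train a supervised model to predict the human ratings from the predictors.
model = GradientBoostingRegressor().fit(X, y)

# The model can now issue a predicted interview rating for a new applicant.
predicted_rating = model.predict(rng.normal(size=(1, 3)))
```

Note that the model is only ever trying to reproduce the human ratings it was trained on – which is exactly why biases in those ratings matter so much.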

If the human judgements used to train the models are biased, then it is likely that the algorithm will perpetuate or even amplify these biases, meaning that particular groups could be penalised by the model and rejected at an early stage of the interview process, perhaps before they even have a chance to meet with a human recruiter. Left unchecked, algorithmic recruitment tools could see entire subgroups consistently overlooked for opportunities at a large scale. They can, therefore, have an even more damaging effect than biased human judgements.
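One way this harm is detected in practice is by comparing selection rates across groups – the approach behind the US four-fifths rule and the impact ratios required by New York City Local Law 144 bias audits. A minimal sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical screening outcomes: which applicants from each group were selected.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})

selection_rates = outcomes.groupby("group")["selected"].mean()

# Impact ratio: each group's selection rate relative to the most-selected group.
impact_ratios = selection_rates / selection_rates.max()

# Under the four-fifths rule of thumb, ratios below 0.8 warrant closer scrutiny.
print(impact_ratios[impact_ratios < 0.8])
```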

However, there is good news too – algorithms can reduce the need for human input, and since cognitive biases are exceedingly difficult to overcome, it is arguably easier to mitigate bias in an algorithm than in a human. Bias in algorithmic systems can potentially be minimised by deliberately pursuing equal outcomes during design and development, as well as by using machine learning techniques to mitigate biases in the training data.
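As an illustrative example of such a technique, reweighing adjusts the weight of each group-outcome combination in the training data so that group membership and the target label look statistically independent during training. A rough sketch, assuming hypothetical data and scikit-learn-style sample weights:

```python
import pandas as pd

# Hypothetical training data: group membership and historical hiring outcomes.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1,   1,   0,   1,   0,   0,   0,   1],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# Weight = expected probability under independence / observed joint probability,
# so under-represented (group, label) combinations count more during training.
weights = df.apply(
    lambda row: p_group[row["group"]] * p_label[row["label"]]
    / p_joint[(row["group"], row["label"])],
    axis=1,
)

# These weights can be passed as `sample_weight` when fitting many scikit-learn models.
```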

It is more complicated to validate algorithmic tools

One of the major advantages of algorithmic selection tools is that they can use non-traditional data, including answers and behaviour captured during game-based assessments, features extracted from video interviews, or even social media activity. Whereas questionnaire-based assessments are typically developed by teams of experts who curate each item to measure a particular outcome variable (giving the assessment face validity, since it visibly appears to measure that outcome), the features included as predictors in algorithms do not always have a clear link to the outcome variable.

This means that the features can lack face validity, since psychologists are unlikely to be able to explain how something like the duration of pauses in a video interview is linked to personality, for example. As such, there can be a greater focus on how well the assessment and scoring algorithm predict the target variable – the assessment's accuracy – than on ensuring that each individual predictor has a clear link to the construct. Predictors should also be tested for job relevance and for how well they predict future performance. This is particularly important if the tool results in biased outcomes, since evidence of job-relevance will need to be provided to justify the continued use of the tool.
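As a rough illustration of what such a criterion validity check might look like – using entirely synthetic data – one can correlate assessment scores with a later measure of job performance:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical assessment scores and later performance ratings for 200 hires.
assessment_scores = rng.normal(size=200)
job_performance = 0.4 * assessment_scores + rng.normal(size=200)

# The validity coefficient: how strongly scores relate to actual performance.
r, p_value = pearsonr(assessment_scores, job_performance)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```

In practice, such studies must also contend with complications like range restriction, since performance data typically exists only for candidates who were actually hired.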

Algorithmic tools can lack explainability

The introduction of algorithms to score assessments can limit how explainable the resulting scores are. Take, for example, a straightforward questionnaire-based personality assessment in which respondents indicate how much certain statements correspond with their personality on a scale of 1 to 5. The statements in such an assessment will have been created by experts and validated to make sure they measure the intended construct and that there is consistency within the scale. Scores on assessments like this are typically summed to generate a personality score using a scoring key that accounts for how much respondents agreed with each statement.
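A minimal sketch – with hypothetical items and a hypothetical scoring key including one reverse-scored item – shows how transparent this classical approach is:

```python
# Hypothetical responses to a three-item scale (1-5 Likert ratings).
responses = {"item_1": 4, "item_2": 2, "item_3": 5}

# The scoring key flags reverse-scored items, whose scale is flipped (6 - rating).
scoring_key = {"item_1": "normal", "item_2": "reversed", "item_3": "normal"}

trait_score = sum(
    (6 - rating) if scoring_key[item] == "reversed" else rating
    for item, rating in responses.items()
)
print(trait_score)  # 4 + (6 - 2) + 5 = 13
```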

However, algorithms can make scoring more complex. Indeed, one of the benefits of using algorithms is that they take a data-driven approach: in the case of supervised learning, they identify patterns in the data to predict specified outcomes. Some of these patterns can be unintuitive to humans, meaning it can be difficult to explain why predictors are given different weights and how they interact within the model. As a result, it can be more challenging to explain how and why particular decisions were made. Maximising the explainability of algorithmic systems is, therefore, important for ensuring that applicants can make informed decisions about their interactions with the tool and have the means to dispute decisions made by algorithms where needed. Codifying this in law will help to ensure that applicants are consistently informed of the use of the tool, the data it collects, how it makes decisions, and how such decisions will be used.
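On the technical side, interpretability techniques can partially recover this explainability. One widely used method, permutation importance, asks how much a model's predictions degrade when each predictor is randomly shuffled. A minimal sketch with synthetic data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical assessment features; the outcome is driven mainly by feature 0.
X = rng.normal(size=(500, 3))
y = 0.8 * X[:, 0] + rng.normal(size=500)

model = GradientBoostingRegressor().fit(X, y)

# Shuffle each feature in turn and measure how much predictions degrade.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```

Summaries like this indicate which predictors drive a model's scores overall, though they still fall short of explaining any individual applicant's decision – another reason disclosure requirements matter.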

Regulation of HR Tech is coming

While the use of algorithmic selection tools can streamline processes and improve talent management pipelines for both employees and applicants, targeted regulation of algorithmic hiring tools is crucial to ensure their fair and ethical use. Well-crafted laws for HR tech can mandate disclosures to applicants, minimise bias through auditing, and require proper validation of these automated systems. Regulations specific to HR tech, complementing broader anti-discrimination laws, will uphold fairness and transparency.

Policymakers worldwide – and particularly in the US – are increasingly targeting HR tech. With a new wave of requirements imminent, acting early is key for compliance. Holistic AI can help. Schedule a call with our expert compliance team to find out how we can help you prepare for impending HR tech regulations.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
