Bias refers to unjustified differences in outcomes for different subgroups. To contextualise this, bias in recruitment could take the form of white candidates being hired at a greater rate than non-white candidates when race is unrelated to job requirements, and bias in credit scoring could result in men receiving higher scores than women when factors such as payment defaults and education are controlled for.
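To make "unjustified differences in outcomes" concrete, here is a minimal Python sketch, using entirely hypothetical hiring data, of the selection-rate comparison that underlies the US EEOC's four-fifths rule: a group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. The group names and numbers are illustrative only.

```python
from collections import Counter

# Hypothetical records: (group, was_hired)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

hired = Counter(g for g, h in decisions if h)      # hires per group
total = Counter(g for g, _ in decisions)           # applicants per group
rates = {g: hired[g] / total[g] for g in total}    # selection rate per group

highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "adverse impact flagged" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_a is selected at a rate of 0.75 and group_b at 0.25, an impact ratio of 0.33, well below the 0.8 threshold. A real audit would also test whether such a gap is justified by job-related factors.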
As humans, we can be biased whether or not we are aware of it. Unconscious bias refers to implicit associations, outside our awareness, that cause us to favour one group over another. For example, a candidate's name might influence a hiring manager's opinion of their application, even if the manager is not actively seeking a hire with specific characteristics. We can also be consciously biased, making decisions based on protected characteristics rather than merit. For example, a hiring manager might actively seek a male applicant for a leadership position.
Algorithms can also be biased, but not in the same way as humans. Algorithms identify patterns in data, even if these patterns are not intuitive or recognised by humans. Because of this, algorithms can reflect and amplify biases in the data they were trained on. This means that they can display the same biases as humans, but not for the same reasons.
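The toy sketch below, on hypothetical data, illustrates this mechanism: when historical labels encode discrimination, even a trivially simple "learner" that reproduces the majority outcome for each candidate profile will repeat that discrimination.

```python
from collections import Counter, defaultdict

# Hypothetical history: ((qualification_score, group), hired)
# Equally qualified candidates were historically treated differently by group.
history = [
    ((8, "a"), 1), ((8, "a"), 1),
    ((8, "b"), 0), ((8, "b"), 0),
]

# "Train" a trivial model: the majority historical outcome for each profile
votes = defaultdict(Counter)
for profile, outcome in history:
    votes[profile][outcome] += 1
model = {profile: counts.most_common(1)[0][0] for profile, counts in votes.items()}

# The learned rule reproduces the historical discrimination
print(model[(8, "a")])  # 1 -> hired
print(model[(8, "b")])  # 0 -> rejected, despite identical qualifications
```

Real models are far more complex, but the principle is the same: the algorithm has no notion of fairness, only of patterns in the data it was given.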
Using algorithms to make decisions can introduce unique sources of bias. Some of these sources are:

- Unrepresentative training data: if some subgroups are under-sampled, the model will perform worse for them.
- Historical bias: past discriminatory decisions can be encoded in the labels a model learns from.
- Proxy variables: seemingly neutral features, such as postcode, can correlate strongly with protected characteristics.
- Feedback loops: a model's decisions shape the data it is later retrained on, which can entrench early disparities.
The appropriate mitigation strategies depend on the source of bias, and they often require technical expertise to implement. However, some ways to mitigate bias are:

- Auditing training data for representativeness and historical bias before a model is built.
- Testing model outputs across subgroups using fairness metrics, such as the selection-rate comparison shown earlier.
- Applying technical interventions, such as reweighing training examples (a sketch follows this list).
- Keeping humans in the loop to review and, where necessary, override automated decisions.
- Commissioning independent bias audits of deployed systems.
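As one illustration of a technical intervention, below is a minimal sketch of "reweighing" (Kamiran & Calders, 2012), a pre-processing technique that weights training examples so that the protected attribute becomes statistically independent of the label. The data and group names are hypothetical.

```python
from collections import Counter

# Hypothetical training rows: (protected_group, label)
rows = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]

n = len(rows)
group_counts = Counter(g for g, _ in rows)   # counts per group
label_counts = Counter(y for _, y in rows)   # counts per label
pair_counts = Counter(rows)                  # counts per (group, label) pair

# weight(g, y) = P(g) * P(y) / P(g, y): the frequency expected under
# independence divided by the frequency actually observed
weights = [
    (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for g, y in rows
]
print(weights)
# Over-represented (group, label) pairs are down-weighted; these values can
# be passed as sample_weight when fitting a classifier.
```

This is only one of several families of intervention; others adjust the model's objective during training or post-process its predictions.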
Bias audits of automated employment decision tools will soon be required under legislation passed by the New York City Council (Local Law 144), meaning that any employer using an automated decision tool to evaluate a candidate residing in New York City must commission an audit. These audits must be carried out by an impartial third-party auditor with the relevant expertise to examine the algorithm and its outputs for bias against protected groups.
NYC is the first jurisdiction to mandate bias audits, and its requirement covers only automated decision tools used in recruitment, but other regulators are moving in a similar direction: legislation in Colorado prohibits insurance providers from using data and algorithms that result in unjustified discrimination, or bias, against protected groups. Bias audits can also contribute to the risk management of algorithmic systems, which is particularly important for systems considered high risk, since these will be required to have risk management strategies under the forthcoming EU AI Act. Opting for bias audits and implementing risk management strategies for your AI systems can empower you to adopt AI with confidence.
Govern AI to sustain scaling and adoption
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.