What is Bias and How Can it be Mitigated?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Aug 30, 2022

What is Bias?

Bias refers to unjustified differences in outcomes for different subgroups. To contextualise this, bias in recruitment could take the form of white candidates being hired at a greater rate than non-white candidates when race is not related to job requirements, and bias in credit scoring could result in males being given higher scores than females when factors such as payment defaults and education are controlled for.
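
To make this concrete, the recruitment example can be quantified by comparing selection rates across groups. The sketch below, in Python with invented outcome data purely for illustration, computes each group's selection rate and its impact ratio relative to the most-favoured group:

```python
import pandas as pd

# Invented hiring outcomes, purely for illustration: 1 = hired, 0 = not hired
df = pd.DataFrame({
    "group": ["white"] * 100 + ["non-white"] * 100,
    "hired": [1] * 40 + [0] * 60 + [1] * 20 + [0] * 80,
})

# Selection rate per group: the proportion of candidates hired
selection_rates = df.groupby("group")["hired"].mean()

# Impact ratio: each group's selection rate relative to the highest one.
# The "four-fifths rule" heuristic flags ratios below 0.8 as potential
# evidence of adverse impact (a screening rule, not a legal determination).
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)  # non-white: 0.2, white: 0.4
print(impact_ratios)    # non-white: 0.5, white: 1.0
```

Here the underrepresented group's impact ratio of 0.5 falls well below the 0.8 heuristic, which is the kind of disparity a bias assessment would flag for further scrutiny.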

As humans, we can be biased whether we are aware of it or not. Unconscious bias refers to implicit associations, outside our awareness, that cause us to favour one group over another. For example, a candidate’s name might influence a hiring manager’s opinion of them, even if the manager is not actively seeking a hire with specific characteristics. We can also be consciously biased, making decisions based on protected characteristics rather than merit. For example, a hiring manager might actively seek a male applicant for a leadership position.

Algorithms can also be biased, but not in the same way as humans. Algorithms identify patterns in data, even if these patterns are not intuitive or recognised by humans. Because of this, algorithms can reflect and amplify biases in the data they were trained on. This means that they can display the same biases as humans, but not for the same reasons.

Sources of Bias in Algorithms

Using algorithms to make decisions can introduce unique sources of bias. Some of these sources are:

  • Human biases – if algorithms are trained based on prior human decisions, and these decisions are biased, then the algorithm will reflect these human biases.
  • Unbalanced training data – if the algorithm is trained on data that is dominated by particular subgroups, the algorithm can be biased against the subgroups that are underrepresented.
  • Differential feature use – if an algorithm uses different features to evaluate the performance of different groups, the algorithm itself can be said to be biased and can result in biased outcomes.
  • Proxy variables – even if protected attributes are not used by an algorithm to make a decision, proxy variables can represent these characteristics. For example, zip code can be used as a proxy for race; a simple check for such proxies is sketched after this list.
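
To illustrate the proxy-variable point, the sketch below screens candidate features by their correlation with a protected attribute. The data and column names are made up for the example; a high absolute correlation suggests a feature may be acting as a proxy, though correlation alone is a crude screen rather than proof:

```python
import pandas as pd

# Made-up data with a hypothetical binary protected attribute
df = pd.DataFrame({
    "protected":        [0, 0, 0, 0, 1, 1, 1, 1],
    "zip_code_income":  [85, 82, 90, 88, 40, 38, 45, 41],  # hypothetical feature
    "years_experience": [3, 5, 2, 5, 4, 6, 3, 4],          # hypothetical feature
})

# Absolute correlation of each feature with the protected attribute.
# zip_code_income tracks the protected attribute almost perfectly here,
# so it would be flagged as a likely proxy variable.
correlations = df.drop(columns="protected").corrwith(df["protected"]).abs()
print(correlations.sort_values(ascending=False))
```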

Bias Mitigation Strategies

The appropriate mitigation strategies depend on the source of bias, and they often require technical expertise to implement. However, some ways to mitigate bias are:

  • Obtaining additional data – create a more balanced dataset or gather data from multiple sources to reduce the effect of imbalanced data or human biases
  • Adjusting the hyperparameters of the model – introduce or increase the regularisation of the model to change the way the model fits the data and reduce bias
  • Removing or reweighting features – if a feature has a high correlation with a protected attribute, it may be acting as a proxy variable. Removing such features, or reweighting them to have a smaller influence, can help to mitigate bias; a minimal reweighting sketch follows this list.
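
As a minimal sketch of the reweighting idea, assuming scikit-learn and invented data, the example below weights training samples inversely to their group's frequency so that an underrepresented group contributes equally to the fit. Sample-based reweighting of this kind also addresses the unbalanced-data problem above, and mirrors (in simplified form) the reweighing technique implemented in toolkits such as AIF360:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: 200 samples, group 1 heavily underrepresented
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)
group = np.array([0] * 160 + [1] * 40)

# Weight each sample inversely to its group's frequency so both groups
# contribute equally to the loss (one simple scheme among many).
group_counts = np.bincount(group)                       # [160, 40]
sample_weight = len(group) / (2 * group_counts[group])  # 0.625 or 2.5

model = LogisticRegression()
model.fit(X, y, sample_weight=sample_weight)
```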

Bias Audits

Bias audits of automated employment decision tools will soon be required under Local Law 144, passed by the New York City Council: any employer using an automated decision tool to evaluate candidates residing in New York City must commission an audit. These audits must be carried out by an impartial third-party auditor with the relevant expertise to examine the algorithm and its outputs for bias against protected groups.

While NYC is the first jurisdiction to mandate bias audits, and its requirement applies only to automated decision tools used in recruitment, legislation in Colorado prohibits insurance providers from using data and algorithms that result in unjustified discrimination, or bias, against protected groups. Bias audits can also contribute to the risk management of algorithmic systems. This is particularly important for systems considered high-risk, which will be required to have risk management strategies under the forthcoming EU AI Act. Opting for bias audits and implementing risk management strategies for your AI systems can empower you to adopt AI with confidence.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
