
Using Python to Mitigate Bias and Discrimination in Machine Learning Models

Authored by Kleyton da Costa, Machine Learning Researcher at Holistic AI
Published on Jun 16, 2023

The importance of mitigating bias in machine learning

Machine learning models can be used in critical applications, such as hiring processes, the judicial system, credit scoring, or facial recognition systems. In these cases, it is essential to ensure that the algorithms used do not discriminate against those they are assessing.

There have been a number of high-profile, real-world instances of AI systems displaying bias and materially impacting particular groups. A 2018 study, for example, found that three popular facial recognition systems misclassified Black women at an error rate of up to 34.7%, compared with an error rate of just 0.8% for white men.

In 2020, another study analysed facial recognition systems from different companies. The results showed that the models achieve high overall accuracy (over 90%), but this accuracy is not uniform across groups, revealing a clear racial bias: accuracy for people with darker skin was up to one-third lower than for people with lighter skin.

The prevalence of AI in systems that have the potential to significantly impact people's lives means there is a genuine risk that the technology could reinforce existing social inequities if a conscious effort is not made to address these issues within machine learning.

Face Recognition Technologies

Responsible AI and bias mitigation

With the use of artificial intelligence technologies now widespread, the field of responsible AI has become even more important. It is crucial that the companies, researchers, and individuals who use or build AI models take responsibility for the development of these systems. The first step? Understanding what bias is.

Bias can be understood in different ways. Generally, when we talk about biases in machine learning models, we are referring to the model's inability to capture the complexity of the data, i.e., a model that systematically makes incorrect predictions. In this case, a model with high bias has the characteristic of underfitting.

In the context of fair and responsible AI, we define bias as an unwanted prejudice in the decisions made by an AI system that is systematically disadvantageous to a person or group. There are various types of biases, and they can be inadvertently introduced into algorithms at any stage of the development process, whether during data generation or model construction.

To be able to measure whether a system treats different groups equally, we must first choose a definition of equality. Two common definitions are:

  • Equality of outcomes: Under this definition, we require all subgroups to have equal outcomes. For example, in a recruitment context, we may require the percentage of hired candidates to be consistent across groups (e.g., we want to hire 5% of all female candidates and 5% of all male candidates).
  • Equality of opportunity: Under this definition, we require all subgroups to have equal opportunities. For example, if we have a facial recognition algorithm, we may want the classifier to perform equally well for all ethnicities and genders.

You can learn all about the different metrics for equality of outcomes and opportunity via the resources in the Holistic AI library.
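
As a minimal sketch of how these two notions translate into numbers (not part of the original tutorial, and using illustrative toy arrays), equality of outcomes can be checked by comparing selection rates between groups, while equality of opportunity can be checked by comparing true positive rates:

import numpy as np

# toy predictions, labels, and group membership (illustrative values only)
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # e.g. female
group_b = ~group_a                                               # e.g. male

# equality of outcomes: compare selection rates (share of positive predictions)
selection_rate_a = y_pred[group_a].mean()
selection_rate_b = y_pred[group_b].mean()
print("selection rate ratio:", selection_rate_a / selection_rate_b)  # fair reference: 1

# equality of opportunity: compare true positive rates
tpr_a = y_pred[group_a & (y_true == 1)].mean()
tpr_b = y_pred[group_b & (y_true == 1)].mean()
print("TPR difference:", tpr_a - tpr_b)  # fair reference: 0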

Holistic AI pipeline + Sklearn

Using the Holistic AI library, we can apply dozens of bias measurement and mitigation strategies to machine learning models for classification, regression, clustering, and recommendation tasks, all from Python. Python is a popular programming language used extensively in data science and machine learning projects due to its simplicity, versatility, and robustness.

Python has a vast collection of libraries and frameworks that facilitate data analysis, data visualisation, statistical modelling, and machine learning tasks. In this post we use three libraries: Holistic AI (responsible AI tasks), Scikit-Learn (machine learning models), and Matplotlib (data visualisation).
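
If you want to follow along, all three libraries are available from PyPI; the package names below (holisticai, scikit-learn, matplotlib) are the usual ones, and the short check is just an optional convenience, not part of the original tutorial:

# optional environment check
# (typically installed with: pip install holisticai scikit-learn matplotlib)
import holisticai
import sklearn
import matplotlib

for lib in (holisticai, sklearn, matplotlib):
    print(lib.__name__, getattr(lib, "__version__", "version attribute not found"))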

In this introductory tutorial, we will address a specific case: bias mitigation in a classification task using the Exponentiated Gradient Reduction strategy.

Step 1: Reading and preprocessing the data

In this example, we use the Adult dataset. We start by loading the data and performing simplified pre-processing to obtain two groups of interest based on gender: female and male. This way, we will be able to investigate and, if necessary, mitigate any existing bias between the two groups.


# imports
from holisticai.datasets import load_adult
from sklearn.model_selection import train_test_split
import pandas as pd

# load the Adult dataset and combine features and target into one DataFrame
data = load_adult()
df = pd.concat([data["data"], data["target"]], axis=1)
protected_variables = ["sex", "race"]
output_variable = ["class"]

# simplified preprocessing: binarise the target and one-hot encode the features,
# dropping the protected attributes from the feature matrix
y = df[output_variable].replace({">50K": 1, "<=50K": 0})
X = pd.get_dummies(df.drop(protected_variables + output_variable, axis=1))

# group_a = female and group_b = male (boolean membership masks)
group = ["sex"]
group_a = df[group] == "Female"
group_b = df[group] == "Male"
data_ = [X, y, group_a, group_b]

# split data into train and test sets; train_test_split returns a train/test
# pair for each input, so even indices are the train parts and odd indices the test parts
dataset = train_test_split(*data_, test_size=0.2, shuffle=True)
train_data = dataset[::2]
test_data = dataset[1::2]
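
As a quick sanity check (not in the original tutorial), each of the two lists now holds X, y, group_a, and group_b for its split, which can be confirmed by unpacking them:

# confirm the structure of the split lists
X_train, y_train, group_a_train, group_b_train = train_data
X_test, y_test, group_a_test, group_b_test = test_data
print(X_train.shape, X_test.shape)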

Step 2: Data description and analysis


from holisticai.bias.plots import group_pie_plot, histogram_plot

import matplotlib.pyplot as plt

fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# protected attribute "sex" and target attribute "class"
p_attr_sex = df['sex']
p_attr_class = df['class']

# pie plot for attribute "sex"
group_pie_plot(p_attr_sex, ax=axs[0])

# pie plot for attribute "class"
group_pie_plot(p_attr_class, ax=axs[1])
Data Description and Analysis: Group - Female and Male

# histogram by "race" and colored by "class"
histogram_plot(df['race'], df['class'])
Data Description and Analysis: Histogram Plot

In the figures above, we can see that the dataset is imbalanced, with a significantly larger representation of men and of individuals in the "white" category of the race attribute. This can lead to biased results and undermine the ability to generalise any findings or models trained with this data.
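
To put numbers on this imbalance (an optional addition to the tutorial code), the group proportions can be printed directly from the DataFrame loaded in Step 1:

# share of each category in the protected attributes and the target
print(df["sex"].value_counts(normalize=True))
print(df["race"].value_counts(normalize=True))
print(df["class"].value_counts(normalize=True))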

Step 3: Training a baseline model


from holisticai.bias.metrics import classification_bias_metrics
from holisticai.pipeline import Pipeline

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# set up a baseline Pipeline with no bias mitigation step
pipeline = Pipeline(steps=[
   ('scaler', StandardScaler()),
   ('classifier', LogisticRegression()),
   ])

# train model with Pipeline
X_train, y_train, group_a, group_b = train_data
pipeline.fit(X_train, y_train)

# test model with Pipeline
X_test, y_test, group_a, group_b = test_data
y_pred = pipeline.predict(X_test)

# baseline metrics for equality of outcomes and equality of opportunity
metrics_baseline = classification_bias_metrics(group_a, group_b, y_pred, y_test, metric_type='both')
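
Before moving on to mitigation, it can be helpful to look at this baseline table; as used in Step 5 below, classification_bias_metrics returns a table with a 'Value' column and a fair 'Reference' column for each metric:

# inspect the baseline bias metrics (measured Value vs. fair Reference)
print(metrics_baseline[['Value', 'Reference']])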

Step 4: Training a mitigated model


# import mitigation strategy
from holisticai.bias.mitigation import ExponentiatedGradientReduction

# select model
model = LogisticRegression()

# select mitigation strategy
inprocessing_model = ExponentiatedGradientReduction(
    constraints="DemographicParity",
    verbose=1
).transform_estimator(model)

# setup Pipeline with the in-processing bias mitigator
pipeline = Pipeline(
   steps=[
       ('scaler', StandardScaler()),
       ("bm_inprocessing", inprocessing_model),
   ]
)

# train model with mitigator in Pipeline
X_train, y_train, group_a, group_b = train_data
fit_params = {
   "bm__group_a": group_a,
   "bm__group_b": group_b
}
pipeline.fit(X_train, y_train, **fit_params)

# test model with mitigator in Pipeline
X_test, y_test, group_a, group_b = test_data
predict_params = {
   "bm__group_a": group_a,
   "bm__group_b": group_b,
}
y_pred = pipeline.predict(X_test, **predict_params)

# mitigated metrics for equality of outcomes and equality of opportunity
metrics_mitigated = classification_bias_metrics(group_a, group_b, y_pred, y_test, metric_type='both')

Step 5: Comparing the results


# combine baseline and mitigated metrics into a single comparison table
results = pd.concat([metrics_baseline['Value'], metrics_mitigated[['Value', 'Reference']]], axis = 1)
results.columns = ['Baseline', 'Mitigated', 'Reference']

results
Comparing the Results

The “Reference” column shows the value each metric would take in a perfectly fair outcome. For example, a Disparate Impact of 1 indicates that the results are fair and unbiased. Without the mitigator (Baseline), the metric is 0.2960, far from what would be considered fair. Applying the mitigator brings this value to 0.9712, much closer to the fair reference of 1.
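
As a sanity check on what Disparate Impact measures, it can also be computed by hand from the mitigated predictions as the ratio of selection rates between the two groups (a common definition of the metric; this snippet is an optional addition that reuses the test-set variables from Step 4):

import numpy as np

# flat boolean masks for the two groups in the test set
mask_a = np.asarray(group_a).ravel().astype(bool)
mask_b = np.asarray(group_b).ravel().astype(bool)

# selection rate = share of positive predictions in each group
selection_rate_a = y_pred[mask_a].mean()
selection_rate_b = y_pred[mask_b].mean()

# disparate impact: ratio of selection rates (fair reference: 1)
print(selection_rate_a / selection_rate_b)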

The comparison shows that the mitigation strategy was effective in reducing the existing bias between group_a (female) and group_b (male).

Summary

Implementing techniques such as data preprocessing, fairness metrics, and bias mitigation can help ensure that AI models do not perpetuate unfair biases against certain groups. Using Python's powerful libraries and frameworks, we can easily incorporate these steps into machine learning pipelines and automate the process of identifying and addressing bias in our models. As machine learning's role in our daily lives continues to grow, it is vital that we employ these safeguards to mitigate bias and create a fairer AI future.

To explore more techniques for measuring and mitigating bias, visit the Holistic AI library, an open-source resource designed to improve the trustworthiness and transparency of AI systems.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
