Machine learning models have become a part of our day-to-day life, increasingly being used to make predictions and decisions, classify images or text, recognise objects, and recommend products, for example.
While one of the main objectives of those developing machine learning models is maximising their accuracy or efficacy, it is also important to consider the limitations and challenges of these models, especially in relation to bias. Bias occurs when a model produces different outcomes for different subgroups, and it can result from various factors at different stages of the model's development. Mitigating bias is key to unlocking value responsibly and equitably, but since bias in these systems is not a purely technical problem, addressing it involves assessing models against a variety of metrics, whose results can then be used to improve the outcomes produced and reduce bias.
In this blog post, we use the COMPAS tool as an example of a biased system and illustrate how bias can be measured using Holistic AI’s open-source library.
One of the most well-known cases of bias in an automated system is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which used a machine learning model as a decision support tool to score the likelihood of recidivism. To make predictions, the algorithm was fed information about individuals, including their age, sex, and criminal history. Based on this information, it assigned defendants scores from 1 to 10 indicating how likely they were to re-offend. However, an investigation into the tool found that the model was biased against black people, assigning them higher risk scores than white people.
This situation drew attention to the urgent need to improve the equity of predictions and decisions made by machine learning models, given the significant impact these systems can have on individuals' lives. The case is considered a key motivator for the creation of tools that help developers identify and address bias during the development of AI models. A number of metrics have since been proposed to measure the fairness of a model, such as statistical parity and disparate impact.
One example of these assessment tools is the open-source library built by Holistic AI, created to assess and improve the trustworthiness of AI systems through a set of techniques to measure and mitigate bias intuitively and easily. A distinguishing feature of this library is its compatibility with, and similar syntax to, the well-known scikit-learn library: in most cases, the user only needs to separate the protected group from the training dataset and then follow the traditional pipeline to fit the model and predict the outcomes, as we will see later.
For this example, we will use the holisticai library to address the bias problem in a pre-processed COMPAS dataset, which can be found here.
First, we simply need to install the library into our Python environment using the following command:
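```bash
pip install holisticai
```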
This version of the COMPAS dataset can be loaded and explored from our working directory using the pandas package:
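The sketch below assumes the pre-processed dataset has been downloaded to the working directory as a CSV file; the filename used here is hypothetical, so adjust it to match your download:

```python
import pandas as pd

# Load the pre-processed COMPAS dataset from the working directory
# (hypothetical filename - adjust to match your downloaded file)
df = pd.read_csv("compas_preprocessed.csv")

# Quick exploration: shape, first rows, and missing values
print(df.shape)
print(df.head())
print(df.isnull().sum())
```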
As we can see above, this dataset is composed of 12 features, where the outcome is the 'Two_yr_Recidivism' column, which indicates whether or not a person re-offended within the following two years. The remaining columns include information such as the offender's criminal record, ethnicity, and sex. Moreover, there are no missing values in the dataset, so no additional cleaning is needed.
To analyse bias in the model, in this example we will select the 'Hispanic' column as our protected attribute, but feel free to select any column that you want to analyse. As can be seen below, the values in the Hispanic column are 0 and 1, where 0 indicates that the offender is not Hispanic and 1 indicates that they are.
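For example, a quick check of the group counts with pandas:

```python
# Count how many offenders fall into each group of the protected attribute
print(df["Hispanic"].value_counts())
```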
We can use plots from the holisticai library to observe the proportions of the data and then perform a quick exploration.
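As a sketch, assuming the plotting helper group_pie_plot is exposed by holisticai.bias.plots (check the reference documentation for your installed version):

```python
import matplotlib.pyplot as plt
from holisticai.bias.plots import group_pie_plot

# Pie chart of the proportion of Hispanic (1) vs non-Hispanic (0) offenders
group_pie_plot(df["Hispanic"])
plt.show()
```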
As we can see, Hispanic people (labelled as 1) represent only 8% of the dataset.
Using the frequency_plot function, we can also observe that the 0 group (non-Hispanics) has a much higher rate of positive outcomes (recidivism within two years) than the 1 group (Hispanics). In other words, a higher proportion of non-Hispanics reoffended within two years than Hispanics.
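The call below is a sketch that assumes frequency_plot takes the protected-attribute vector followed by the outcome vector; check the library reference for the exact signature:

```python
import matplotlib.pyplot as plt
from holisticai.bias.plots import frequency_plot

# Frequency of the outcome (Two_yr_Recidivism) within each group
frequency_plot(df["Hispanic"], df["Two_yr_Recidivism"])
plt.show()
```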
We will begin by training the model in the traditional way, without considering the influence of any protected attribute, and then calculate some fairness metrics to assess the model's predictions.
For this example, we use a traditional logistic regression model.
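Here is a minimal sketch of the training step. The preprocessing choices are illustrative: we keep the protected attribute aside for the bias analysis later and drop it from the features before fitting.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Separate the target, the protected attribute, and the features
y = df["Two_yr_Recidivism"]
p_attr = df["Hispanic"]
X = df.drop(columns=["Two_yr_Recidivism", "Hispanic"])  # illustrative choice

# Split the data, keeping the protected attribute aligned with the test set
X_train, X_test, y_train, y_test, p_train, p_test = train_test_split(
    X, y, p_attr, test_size=0.3, random_state=42
)

# Fit a standard logistic regression model and predict on the test set
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```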
Now, we can calculate some performance metrics on the predicted outcomes:
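For instance, using scikit-learn's standard classification metrics (accuracy, precision, recall, and F1 here; other choices are equally valid), continuing from the training step above:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Standard performance metrics on the held-out test set
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
```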
The values we obtain are reasonable, though they could be improved with further optimisation.
Now we need to measure the bias present in the model with respect to the protected attribute. To calculate the bias of the model, the holisticai library contains a range of useful metrics. To use these functions, we only need to separate the protected attribute from the data and supply the predictions along with the expected outcomes.
The library includes a convenient function, classification_bias_metrics, that computes a range of relevant classification bias metrics, such as statistical parity and disparate impact, and displays them in a table that includes fair reference values for comparison. The function allows us to select which metrics to calculate by setting the metric_type parameter to equal_outcome, equal_opportunity, or both. For this example, we will calculate all the metrics, and so pass both as the value of the metric_type parameter.
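A sketch of this step, assuming classification_bias_metrics accepts the two group membership masks followed by the predictions and the true labels (check the library reference for the exact argument order):

```python
from holisticai.bias.metrics import classification_bias_metrics

# Boolean masks for the unprivileged (Hispanic) and privileged (non-Hispanic) groups,
# taken from the protected attribute aligned with the test split
group_a = p_test == 1  # Hispanic
group_b = p_test == 0  # non-Hispanic

# Compute both equal-outcome and equal-opportunity metrics; the returned table
# also contains the fair reference value for each metric
bias_metrics = classification_bias_metrics(
    group_a, group_b, y_pred, y_test, metric_type="both"
)
print(bias_metrics)
```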
These metrics help us to determine whether the model is biased or not. For example, for the statistical parity metric, values lower than -0.1 or higher than 0.1 indicate bias. For the disparate impact, values lower than 0.8 or higher than 1.2 indicate bias. As we can see, the library presents us with not only the calculated values for the fairness metrics but also the reference values which indicate an ideal debiased AI model. Therefore, the closer the values are to the reference, the fairer our model is.
Given the values in this table, we can clearly observe that the model is biased against Hispanics, who are predicted to re-offend at a higher rate than non-Hispanics. The remaining metrics also provide useful information: both the Four-Fifths rule, which is widely used in selection contexts and indicates the presence of different outcomes for different subgroups, and the Equality of Opportunity Difference, which measures the difference between the true positive rates of the privileged and unprivileged groups, are violated, again indicating bias. You can find more details about the metrics in the library's reference documentation.
Through this tutorial, we explored the holisticai library, which allows us to measure the bias present in AI models. In this example we used the classification_bias_metrics function, but the library provides different functions to measure bias not only in binary classification but also in other types of task. You can find them, along with extra examples, in the library's reference documentation.
If you want to follow this tutorial for yourself, you can do so here.