Artificial intelligence (AI) is increasingly present in our lives and becoming a fundamental part of many systems and applications. However, like any technology, it is important to ensure that AI-based solutions are trustworthy and fair. That's where the Holistic AI library comes in.
The Holistic AI library is an open-source tool that contains metrics and mitigation strategies to make AI systems safer. Currently, the library offers a set of techniques to easily measure and mitigate Bias across numerous tasks and includes graphics to visualise the analysis. In the future, it will be extended to include tools for Efficacy, Robustness, Privacy and Explainability as well. This will allow a comprehensive and holistic assessment of AI systems.
The advantages of using the Holistic AI library include:
- It is open source and free to use.
- Metrics and mitigation strategies are available in one place, covering binary classification, multiclass classification, regression, clustering, and recommender systems.
- Built-in plotting functions make it easy to visualise your bias analyses.
- Measuring and mitigating bias typically takes only a few lines of code.
In this blog post, we provide an overview of Holistic AI's bias analysis and mitigation framework, first defining bias and how it can be mitigated, and then surveying the bias metrics and mitigation strategies available in the Holistic AI library.
Bias in data can alter our perception of the world, leading to incomplete or inaccurate conclusions. It arises from consistent errors, such as inappropriate sampling or data collection tools, and personal beliefs that influence how we interpret results. To ensure fair and reliable data, it's crucial to detect and address bias, particularly in decision-making or machine learning.
Advances in the use of artificial intelligence systems bring numerous possibilities and challenges to the field of bias evaluation. End users (governments, companies, consumers, etc.) need confidence that the results generated by this type of technology will not reproduce the prejudices and discriminatory behaviours observed in society at large, since these can be transferred into the data. Through bias metrics, we can measure whether a dataset is unbalanced with respect to a particular race, gender, sexual orientation, religion, age, salary, and so on.
To demonstrate a little of what can be done to measure bias, let's do a case study with the UCI Adult dataset. This dataset is widely used in machine learning exercises and is well suited to applying bias metrics. It has categorical features (work class, education, marital status, occupation, relationship, race, sex, and native country) and integer features (age, years of study, capital gain, capital loss, and work hours per week). The prediction task is to determine whether a person makes over 50K a year (a binary classification task).
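To follow along, you can load the dataset yourself. As a minimal sketch, here we fetch the OpenML mirror of the UCI Adult dataset via scikit-learn (the Holistic AI tutorials also provide their own loader; the "class" target labels assumed below come from the OpenML version):

```python
import pandas as pd
from sklearn.datasets import fetch_openml

# Fetch the OpenML mirror of the UCI Adult (Census Income) dataset
data = fetch_openml(name="adult", version=2, as_frame=True)
df = data.frame

# The prediction target: does a person earn more than 50K a year?
y = (df["class"] == ">50K").astype(int)

print(df.shape)
print(df["sex"].value_counts(normalize=True))  # gender balance in the data
```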
Two important pieces of information can be observed about this dataset. The first is the imbalance between the number of men and women. The pie chart shows that 67% of observations in the dataset are men and only 33% are women; that is, two-thirds of the dataset contains information related to men. Thus, we have a clear visualisation of the participation of men and women in the dataset.
On the other hand, comparing the age distributions of people who earned more or less than 50K in the year shows that the average age of people who earned more than 50K is around 44, while people who earned less than 50K have an average age of 36. In other words, for the analysed dataset, people who earn more than 50K are, on average, older than people who earn less than 50K. It is reasonable to expect that older people have higher incomes, associated with greater work experience.
It's worth noting that creating this type of visualisation with the Holistic AI library is super simple. You just need to use the group_pie_plot and histogram_plot functions.
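As a sketch of those two calls (exact signatures may vary between versions of the library, so it's worth checking the documentation):

```python
import matplotlib.pyplot as plt
from holisticai.bias.plots import group_pie_plot, histogram_plot

fig, axes = plt.subplots(1, 2, figsize=(12, 5))

# Pie chart of the protected attribute: share of men vs. women in the data
group_pie_plot(df["sex"], ax=axes[0])

# Age distribution split by income class (>50K vs. <=50K)
histogram_plot(df["age"], p_attr=df["class"], ax=axes[1])

plt.tight_layout()
plt.show()
```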
In addition, we can analyse bias in the dataset in a simple and objective way. For example, we can generate results for five bias metrics (you can learn more about bias metrics in our Roadmaps for Risk Mitigation) with just three lines of code, and thus measure whether the predictions made by a machine learning model are biased by gender. Here we use the classification_bias_metrics function, but the Holistic AI library has equivalent functions for various other problems. In this case, a Four Fifths Rule value below 0.8 indicates bias against group_a.
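A hedged sketch of that call, reusing df and y from the loading snippet above (the model, the one-hot encoding, and metric_type="both" are our illustrative assumptions; check the library docs for the exact signature):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from holisticai.bias.metrics import classification_bias_metrics

# One-hot encode the features and split the data
X = pd.get_dummies(df.drop(columns=["class"]))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a simple classifier for illustration
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Boolean masks for the two gender groups in the test set
group_a = (df.loc[X_test.index, "sex"] == "Female").to_numpy()  # protected group
group_b = (df.loc[X_test.index, "sex"] == "Male").to_numpy()

metrics_table = classification_bias_metrics(
    group_a, group_b, y_pred, y_true=y_test.to_numpy(), metric_type="both"
)
print(metrics_table)  # includes the Four Fifths Rule among other metrics
```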
Bias in AI can be addressed at different stages of the model life cycle, and the choice of mitigation strategy depends on factors such as data access and model parameters. The Holistic AI library offers three approaches for mitigating bias: pre-processing, in-processing, and post-processing. Pre-processing approaches transform the data before it is fed into the model, in-processing modifies the algorithm without changing the input data, and post-processing adjusts the outputs of the model.
These strategies help to improve the fairness and trustworthiness of AI systems and can be applied to a variety of model types, such as binary classification, multiclass classification, regression, clustering, and recommender systems. For example, a pre-processing approach known as reweighing adjusts the importance of data points to mitigate bias. On the other hand, adversarial training can be used in-processing to adjust predictors associated with bias, and calibration can be used post-processing to ensure that positive outcomes are more evenly distributed across subgroups. An overview of the mitigation strategies in the Holistic AI library and the models they are suitable for can be seen in the table below.
For instance, for the Adult dataset, if we are at a stage where we have access to the training data, we can employ reweighing pre-processing to guide the model training, or we can conduct a more exploratory search over the importance of each example using in-processing Grid Search. And if retraining the model is not an option, post-processing techniques like Calibrated Equalized Odds can still be used to improve fairness. The best part? With the Holistic AI library, testing out these variants can be done with minimal lines of code, making it super simple to use.
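As a minimal sketch of the reweighing route, assuming the Pipeline-style API shown in the library's tutorials (the step names and the bm__ parameter prefix are our assumptions and may differ between versions):

```python
from sklearn.linear_model import LogisticRegression
from holisticai.bias.mitigation import Reweighing
from holisticai.pipeline import Pipeline

# Pre-processing mitigator followed by an ordinary scikit-learn estimator
pipeline = Pipeline(
    steps=[
        ("bm_preprocessing", Reweighing()),  # reweighs training examples
        ("estimator", LogisticRegression(max_iter=1000)),
    ]
)

# Group masks for the training split (built the same way as the test masks above)
group_a_train = (df.loc[X_train.index, "sex"] == "Female").to_numpy()
group_b_train = (df.loc[X_train.index, "sex"] == "Male").to_numpy()

pipeline.fit(X_train, y_train, bm__group_a=group_a_train, bm__group_b=group_b_train)
y_pred_mitigated = pipeline.predict(X_test)
```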
This allows us to perform rapid analyses and even experiment with integrating pre- and post-processing strategies in the same pipeline. Then, we can compare all our results using the Holistic AI metric functions:
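For example, putting the baseline and mitigated predictions side by side (assuming, as above, that the metric functions return DataFrames with a "Value" column):

```python
# Bias metrics for the baseline and the reweighed model
baseline = classification_bias_metrics(group_a, group_b, y_pred, y_true=y_test.to_numpy())
mitigated = classification_bias_metrics(group_a, group_b, y_pred_mitigated, y_true=y_test.to_numpy())

comparison = pd.concat(
    {"Baseline": baseline["Value"], "Reweighing": mitigated["Value"]}, axis=1
)
print(comparison)
```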
We can go deeper and try more strategies, testing better parameter settings, and then visualise our results. Holistic AI has several visualisation methods to improve your analysis and your bias mitigation results. For the Adult dataset (a classification problem), using pre-processing reweighing, below are some of the visualisations you can create using the Holistic AI library.
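For a quick hand-rolled view, you can also chart the comparison table from the previous snippet with plain matplotlib (the library's own plotting helpers offer richer views):

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(len(comparison.index))

plt.figure(figsize=(10, 4))
plt.bar(x - 0.2, comparison["Baseline"], width=0.4, label="Baseline")
plt.bar(x + 0.2, comparison["Reweighing"], width=0.4, label="Reweighing")
plt.xticks(x, comparison.index, rotation=45, ha="right")
plt.ylabel("Metric value")
plt.legend()
plt.tight_layout()
plt.show()
```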
You can find the full tutorial for this here.
Holistic AI's library is a valuable tool for ensuring the reliability and fairness of AI systems. With its easy-to-use interface and graphics for analysing bias, the library offers a comprehensive approach to AI assessment. If you are interested in ensuring the quality of your AI-based solutions, you should take a look at the Holistic AI library.
There are several metrics that can be used to measure bias depending on the type of model being used. The Holistic AI library offers a range of metrics that are suitable for these different systems, as can be seen in the table below.
There are several strategies that can be used to mitigate bias depending on the type of model being used:
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.