
Why Do We Need AI Auditing and Assurance?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Sep 25, 2022

Although algorithms and the resulting automation can bring about a number of benefits, including performing with greater accuracy and speed than humans are capable of, the use of these technologies can also pose risks. Recent years have seen a number of high-profile cases of harm associated with the use of algorithms, highlighting the need for AI ethics and algorithm auditing and assurance.

AI auditing is the evaluation of AI systems to assess and mitigate risks related to safety, legality, ethics, and compliance with regulations. The process promotes the effective, fair, and transparent development and deployment of AI systems throughout their lifecycle.

In this blog post, we give an overview of three high-profile cases that highlight the risks associated with the use of algorithms, and outline how applying AI ethics principles could have prevented these harms from occurring.

The need for AI auditing

Arguably one of the most high-profile cases of harm resulting from algorithms is associated with the COMPAS tool, which was used in the US to predict criminals’ likelihood of recidivism, or in other words, the likelihood that a criminal would re-offend. An independent investigation into the tool by ProPublica found that black defendants were almost twice as likely to be misclassified as having a high risk of recidivism compared to white defendants, who were often predicted to be at lower risk of recidivism than they actually were. Even when factors such as prior crimes, age, and gender were controlled for, black defendants were almost 50% more likely to be classified as high risk than white defendants.

The results of this study suggest that the COMPAS tool is biased against black defendants and has higher error rates, or lower accuracy, for black defendants than for white defendants. Applying AI ethics principles could have mitigated some of these issues had bias assessments and checks for differential accuracy across subgroups been conducted. Further, a deeper examination of the data used to train the models would also likely have been useful, since algorithms can reflect and amplify biases in training data, and law enforcement can be biased against black defendants. The algorithm could therefore have amplified human biases present in the training data, which could have been mitigated if more action had been taken to test for bias.
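
To make this concrete, below is a minimal sketch in Python of the kind of differential-accuracy check described above: computing false positive rates separately for each subgroup and comparing them. The data, group labels, and function names are illustrative assumptions for this post, not ProPublica’s methodology or the COMPAS dataset.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of actual negatives that the model wrongly flags as positive."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())

def fpr_by_group(y_true, y_pred, groups):
    """False positive rate computed separately for each subgroup."""
    return {
        g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

# Toy data: 1 = "high risk" (illustrative only)
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(fpr_by_group(y_true, y_pred, groups))
# A large gap between the groups' rates signals differential error rates
```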

Another high-profile controversy associated with the use of algorithms is Amazon’s scrapped resume screening tool, which penalised applicants whose resumes contained the word “women’s” (e.g., “women’s chess club captain”). This happened because the model was trained on the resumes of previous applicants to Amazon, the majority of whom were male, reflecting the gender imbalance in the tech industry. As a result of this imbalance in the training data, the algorithm was biased against female applicants.

However, the tool was not used to judge any candidates and was scrapped before release once this bias was identified. This highlights the importance of checking for bias and other potential issues, such as safety, privacy, and transparency risks, before a product is deployed and used in practice.
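
One way to run such a pre-deployment check is to compare selection rates across groups, in the spirit of the four-fifths rule used as a heuristic in US employment selection. The sketch below is a hypothetical illustration with made-up data; it is not Amazon’s process or a complete fairness audit.

```python
import numpy as np

def selection_rates(selected: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of each group receiving the positive outcome."""
    return {g: float(selected[groups == g].mean()) for g in np.unique(groups)}

def adverse_impact_ratios(selected, groups):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(selected, groups)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

selected = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])  # toy screening outcomes
groups   = np.array(["m", "m", "m", "m", "f", "f", "f", "f", "f", "f"])

for g, ratio in adverse_impact_ratios(selected, groups).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # 0.8 echoes the four-fifths rule
    print(g, round(ratio, 2), flag)
```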

Apple also came under fire when the algorithm used to determine credit limits for its Apple Card reportedly gave a man a much higher credit limit than his wife, despite her having a higher credit score. Although Apple was later cleared of illegal activity, and the algorithm does not consider the gender of the applicant when making its decision, the case emphasises the importance of examining algorithms and datasets for proxy variables, which can stand in for protected attributes even when those attributes are not used directly, and of testing robustness to ensure a system does not perform differently for different subgroups (i.e., whether it consistently gives males a higher credit limit than females).
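
A simple way to probe for proxy variables is to test how well the supposedly neutral features predict the protected attribute itself: if a model can recover gender from the other inputs, some of them are acting as proxies. The sketch below uses synthetic data and scikit-learn; the feature names and effect sizes are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)              # protected attribute (0/1)
income = rng.normal(50, 10, n)              # unrelated feature
hours  = gender * 5 + rng.normal(35, 3, n)  # correlated feature: a proxy

X = np.column_stack([income, hours])
# AUC well above 0.5 means the features leak the protected attribute
auc = cross_val_score(LogisticRegression(), X, gender,
                      cv=5, scoring="roc_auc").mean()
print(f"protected-attribute AUC from 'neutral' features: {auc:.2f}")
```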

It also stresses the importance of transparency and explainability when it comes to automated decision tools, so that relevant stakeholders can understand how a system came to a decision and challenge any outcomes they do not agree with.
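
As one illustration of an explainability check, permutation importance measures how much a model’s fit degrades when each input is shuffled, giving stakeholders a rough, model-agnostic view of which features drive decisions. The model, data, and feature names below are stand-ins for this post, not Apple’s system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
credit_score = rng.normal(650, 50, n)
income = rng.normal(60, 15, n)
limit = 10 * credit_score + 100 * income + rng.normal(0, 200, n)

X = np.column_stack([credit_score, income])
model = GradientBoostingRegressor().fit(X, limit)

result = permutation_importance(model, X, limit, n_repeats=10, random_state=1)
for name, imp in zip(["credit_score", "income"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # larger = the decision leans more on it
```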

The above examples highlight why AI ethics and the assurance of algorithms are so important. At Holistic AI, our mission is to enable organisations to deploy and embrace AI with greater confidence and reduce the harm that can result from its use. Our solution provides insight, assessment and mitigation of AI risk. Schedule a call with our team to find out more about how we can help you take command and control over your AI.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

