
Auditing vs Assurance: What’s the Difference?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Aug 5, 2022

The rise in applications of artificial intelligence (AI) has resulted in a number of legal, ethical, and safety concerns about its use, which are becoming increasingly important in business and society. Indeed, in response to these concerns, the field of AI ethics has emerged, aiming to minimise the potential and actual harm that can arise from the use of AI. One way to do this is to audit and assure AI systems, particularly those that are high impact or high risk. In other words, these are systems that can have a significant impact on individuals' lives, such as those used in recruitment, healthcare, and housing. While auditing and assurance are related concepts and practices, they are distinct. In this blog, we give an overview of algorithm auditing and assurance, outlining the key components of each practice and how they link to each other.

What is an audit?

Algorithm auditing is the research and practice of assessing, mitigating, and assuring an algorithm's safety, legality, and ethics. While financial auditing is a well-established and regulated practice, algorithm auditing is more nascent. However, there has recently been progress towards algorithm audits being mandated in the same way as financial audits: New York City, for example, now requires bias audits of automated employment decision tools used by employers to evaluate candidates.
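As a concrete illustration, bias audits of hiring tools commonly compare group selection rates using an impact ratio, often judged against the "four-fifths" (0.8) threshold. The sketch below uses hypothetical group names and outcomes; a real audit would use the metrics and demographic categories specified by the applicable rules.

```python
# Minimal sketch of one bias-audit metric: the impact ratio, i.e. a
# group's selection rate divided by the highest group selection rate.
# Group names and outcome data below are hypothetical.

def selection_rate(outcomes):
    """Fraction of candidates selected (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def impact_ratios(groups):
    """Impact ratio per group, relative to the most-selected group."""
    rates = {g: selection_rate(o) for g, o in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes for two demographic groups
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected -> rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 selected -> rate 0.375
}

ratios = impact_ratios(outcomes)
# group_b's ratio of 0.5 falls below the common four-fifths (0.8)
# threshold, flagging potential adverse impact for further review
print(ratios)  # -> {'group_a': 1.0, 'group_b': 0.5}
```

An impact ratio below the threshold does not by itself establish unlawful bias; it flags a disparity that the audit should investigate and, where appropriate, mitigate.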

Aside from bias, an audit can assess other verticals: explainability (being able to explain how a system comes to a decision), robustness or safety (mechanisms to protect against malicious use and ensure the system functions correctly across different contexts), and privacy (with respect to the personal information used by the system). For each of these areas, an audit can examine whether sufficient and appropriate processes are in place to prevent harm. Specifically, an audit can examine an algorithmic system at five stages of development:

  • Data and set-up – collection, storage, extraction, normalisation, transformation, and loading of data for use in the algorithm, to ensure that the data pipelines are well structured and designed
  • Feature pre-processing – selection, enrichment, transformation, and engineering of a feature space to ensure that the features being used in the model are appropriate
  • Model selection – running model cross-validation, optimization, and comparison to select the most appropriate and best-performing model
  • Post-processing and reporting – the use of thresholds, auxiliary tools, and feedback mechanisms to improve interpretability, presentation of results to stakeholders, and evaluation of the impact of the algorithm on the business
  • Production and deployment – the passing of the algorithm through several review processes across several departments to put monitoring and delivery interfaces in place, as well as maintaining records of in-field results and feedback

Once the appropriate assessments of an algorithmic system have been carried out, any risks or harms identified can then be mitigated using relevant risk mitigation strategies.

What is assurance?

After the system has been assessed and mitigation strategies implemented, the auditing process evaluates whether the system conforms to regulatory, governance, and ethical standards. This contributes to assurance of the system: the process of declaring that a system conforms to predetermined standards, practices, or regulations. Assuring a system therefore requires well-established benchmarks against which its trustworthiness can be judged. These can include broad standards, such as the proposed EU AI Act, or narrower regulatory standards, such as those used in financial services (e.g. SEC and FCA rules). Impact assessments should also be used to identify the impacts or risks associated with a system, which can inform procedures to mitigate known risks and safeguard against unknown ones. Assurance can also include certification (of a system, sector, engineer, or agent) and insurance of AI systems.

In short, auditing is just one process that contributes to assuring an algorithmic system; an audit alone is not sufficient to say that a system is assured. Governance and impact assessments are required to facilitate the auditing of a system, and can contribute to its insurance and certification. To find out more about how Holistic AI can assist you with auditing and assurance of your algorithmic systems, schedule a call with a member of our team.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
