The growing application of artificial intelligence (AI) has raised a number of legal, ethical, and safety concerns that are becoming increasingly important in business and society. In response, the field of AI ethics has emerged, aiming to minimise the potential and actual harm that can come about through the use of AI. One way to do this is to audit and assure AI systems, particularly those that are high impact or high risk: that is, systems that can significantly affect individuals’ lives, such as those used in recruitment, healthcare, and housing. While auditing and assurance are related concepts and practices, they are distinct. In this blog, we give an overview of algorithm auditing and assurance, outlining the key components of each practice and how they link to each other.
Algorithm auditing is the research and practice of assessing, mitigating, and assuring an algorithm’s safety, legality, and ethics. While financial auditing is a well-established and regulated practice, algorithmic audits are still nascent. However, there has been recent progress towards algorithm audits being enforced in a similar way: New York City, for example, now mandates bias audits of automated employment decision systems used by employers to evaluate candidates.
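To make this concrete, the sketch below shows the kind of disparate-impact calculation such a bias audit typically involves: computing the selection rate for each demographic category and comparing it to that of the most-selected category. The data, column names, and the four-fifths benchmark are illustrative assumptions, not the methodology prescribed by any particular regulation.

```python
import pandas as pd

# Hypothetical outcomes from an automated hiring tool: one row per
# candidate, with a demographic category and whether they were selected.
outcomes = pd.DataFrame({
    "category": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "selected": [1, 1, 0, 1, 1, 0, 0, 1, 0, 0],
})

# Selection rate per category: the proportion of candidates selected.
selection_rates = outcomes.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate divided by that of the
# most-selected category. A ratio below 0.8 (the "four-fifths rule", a
# common benchmark in US employment contexts) flags potential disparate
# impact that warrants further investigation.
impact_ratios = selection_rates / selection_rates.max()

for category, ratio in impact_ratios.items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {ratio:.2f} ({status})")
```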
Aside from bias, other verticals can be assessed during an audit: explainability (being able to explain how a system comes to a decision), robustness or safety (mechanisms to protect against malicious use and to ensure the system functions correctly across different contexts), and privacy (with respect to the personal information used by the system). For each of these areas, an audit can examine whether there are sufficient and appropriate processes in place to prevent harm. Specifically, an audit can examine an algorithmic system at each of five stages of its development.
Once the appropriate assessments of an algorithmic system have been carried out, any risks or harms identified can then be addressed using relevant mitigation strategies.
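As one illustration of such a strategy, the sketch below implements reweighing, a well-known pre-processing technique (introduced by Kamiran and Calders) that weights training examples so that the protected attribute and the outcome are statistically independent in the weighted data. It is only one of many possible mitigations, and the data and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical training data: a protected attribute and the target label.
train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],
})

n = len(train)
group_counts = train["group"].value_counts()
label_counts = train["label"].value_counts()
joint_counts = train.groupby(["group", "label"]).size()

# Reweighing assigns each (group, label) combination the weight it would
# carry if group membership and outcome were statistically independent:
# the expected count under independence divided by the observed count.
def reweigh(row):
    g, y = row["group"], row["label"]
    expected = group_counts[g] * label_counts[y] / n
    return expected / joint_counts[(g, y)]

train["sample_weight"] = train.apply(reweigh, axis=1)
print(train)

# The resulting weights can be passed to most scikit-learn estimators,
# e.g. model.fit(X, y, sample_weight=train["sample_weight"]), so that the
# trained model is less likely to reproduce the original imbalance.
```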
After assessing the system and implementing mitigation strategies, the auditing process examines whether the system conforms to regulatory, governance, and ethical standards. This contributes to assurance of the system: the process of declaring that a system conforms to predetermined standards, practices, or regulations. Assuring a system therefore requires well-established standards against which its trustworthiness can be judged. These can range from broad frameworks, such as the proposed EU AI Act, to narrow regulatory standards, such as those used in financial services (e.g., by the SEC and FCA). Impact assessments should also be used to identify the impacts and risks associated with a system, which can inform procedures to mitigate known risks and safeguard against unknown ones. Assurance can also include certification (of a system, sector, engineer, or agent) and insurance of AI systems.
In short, auditing is just one process that can contribute to assuring an algorithmic system; an audit alone is not sufficient to say that a system is assured. Governance and impact assessments are required to facilitate the auditing of a system, and can contribute to its insurance and certification. To find out more about how Holistic AI can assist you with auditing and assurance of your algorithmic systems, schedule a call with a member of our team.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.