Abstract. Financial services face a dilemma concerning artificial intelligence (AI). On the one hand, AI adoption is increasingly a necessary condition for remaining competitive in financial services: AI is becoming ubiquitous in trading systems, fraud detection, credit scoring, customer care, and human resource management. On the other hand, AI occasions serious financial, ethical, and reputational risks, as illustrated by high-profile scandals at Knight Capital (where faulty AI led to the company’s bankruptcy) and Amazon (where a recruitment AI was found to be gender-biased). AI is increasingly also a legal liability: regulators across jurisdictions have proposed regulations to limit high-risk AI systems and to penalize irresponsible service providers.
A strategy for resolving this dilemma is AI auditing and assurance. The purpose of algorithmic auditing is to evaluate AI systems by assessing their risks, both technical and governance-related, across several verticals: privacy, robustness, transparency, and fairness. An audit then recommends risk-mitigation strategies and eventuates in an assurance that certifies the system’s compliance with predetermined standards. In our open access chapter, we map out the auditing process, explaining its verticals and their regulatory significance. We also examine current financial regulation, likely future financial regulation, and current proposals for AI regulation, describing how these could and should operate effectively together. Finally, we provide a case study of an audit in financial services: testing a credit scoring system for bias on the basis of protected characteristics.
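To illustrate what such a fairness audit might involve in practice, the sketch below checks a credit scoring system's approve/deny decisions for demographic parity across a protected characteristic. The data, group names, and the 0.1 flagging threshold are illustrative assumptions for this sketch, not values or methods taken from the chapter itself.

```python
# Minimal sketch of one fairness check an algorithmic audit might run:
# demographic parity on a credit model's approve/deny outputs.

def selection_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest pairwise difference in approval rates across groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit sample: 1 = credit approved, 0 = denied,
# split by a protected characteristic (two illustrative groups).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # approval rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # approval rate 0.375
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")
# An assumed audit rule: flag the system if the gap exceeds 0.1.
print("flag for review" if gap > 0.1 else "within tolerance")
```

A full audit would go further, e.g. testing additional fairness metrics (equalized odds, calibration) and probing for proxy variables correlated with the protected characteristic, but the structure is the same: compute a metric, compare it against a predetermined standard, and report compliance.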