What is algorithm auditing?
Algorithm auditing is the independent, third-party assessment of an automated or algorithmic decision-making system. An audit assesses whether an algorithmic system is safe to use and avoids ethical risks such as privacy violations, unfairness, and harm to individual rights.
Legislators in the EU, UK, and US have indicated that algorithm auditing will soon become part of the digital economy's regulatory ecosystem.
Planning for compliance
Companies can act proactively to prepare for regulation. Regulators are looking to industry stakeholders to set the standards for AI safety, fairness, and compliance, giving first movers a strategic advantage in shaping the field.
Preemptive auditing and assurance of AI saves companies the costs of late compliance and gives them time to optimize their fully compliant systems. It also protects them from the financial and reputational costs of systemic risks, and preempting regulation by implementing best practice carries a reputational benefit of its own.
Do you need to audit your AI?
Our cheat sheet will tell you whether an audit is of critical importance or whether a preemptive audit would simply be prudent. This depends on:
- Jurisdiction. Audit requirements depend on the country in which you operate, but also on the residence status of those affected by your AI processes. For example, if your business is based in Delaware but your AI makes decisions affecting residents of New York, then you are in scope for New York legislation.
- Use of AI. Regulation has focused on AI for employment decisions (related to hiring, promotion, and work scheduling) and critical decisions (defined broadly as decisions that significantly affect natural persons).
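The two criteria above can be sketched as a simple scope check. This is an illustrative sketch only, not a legal determination: the function name, the list of regulated jurisdictions, and the classification rules are hypothetical assumptions, not drawn from any statute.

```python
# Hypothetical scope check combining the two cheat-sheet criteria:
# jurisdiction of affected persons, and the use the AI is put to.

# Employment decisions highlighted by regulation (see the list above).
EMPLOYMENT_USES = {"hiring", "promotion", "work scheduling"}

# Example regulated jurisdictions; a real list would track current law.
REGULATED_JURISDICTIONS = {"EU", "UK", "New York"}


def audit_urgency(affected_jurisdictions, use_case, is_critical_decision=False):
    """Return 'critical' if an audit is likely required, else 'prudent'.

    affected_jurisdictions: regions where people affected by the AI
    system reside (not just where the business is based).
    use_case: short label for what the AI decides.
    is_critical_decision: whether the decision significantly affects
    natural persons, per the broad definition above.
    """
    in_scope_region = bool(set(affected_jurisdictions) & REGULATED_JURISDICTIONS)
    in_scope_use = use_case in EMPLOYMENT_USES or is_critical_decision
    if in_scope_region and in_scope_use:
        return "critical"
    return "prudent"
```

For instance, the Delaware business above whose AI makes hiring decisions affecting New York residents would get `audit_urgency({"New York"}, "hiring")` back as `"critical"`, while an out-of-scope use in an unregulated region would come back `"prudent"`.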