Today marks an important milestone in the UK Government’s commitment to the responsible use of data and technology, with the Department for Science, Innovation and Technology publishing a White Paper on what it claims to be a world-leading approach to regulating AI.
Key Aspects of the UK Government's Pro-Innovation AI Regulatory Framework
Seeking to promote responsible innovation and maintain public trust, the White Paper is built around five key principles:
- Safety, security, and robustness: Throughout their lifecycle, there should be adequate mechanisms to ensure that AI systems are continually monitored to identify and manage risks associated with robustness, security, and safety. In other words, systems should be checked to ensure they function reliably, that there are safeguards to minimise security threats, and that safety risks – particularly in applications such as health or critical infrastructure – are managed. In the future, this may result in greater regulation to ensure the reliability and security of AI systems.
- Transparency and explainability: Regulators should have sufficient information about a system’s inputs and outputs to support the other four principles. In addition, information about the system should be communicated to relevant stakeholders, with technical standards providing guidance on how to assess, design, and improve AI transparency.
- Fairness: In addition to complying with existing laws such as the Equality Act and the UK GDPR, AI applications should be developed fairly and equitably across sectors. This seeks to ensure that such applications do not discriminate against individuals or exacerbate societal inequalities, and do not create unfair commercial outcomes that impair market mechanisms.
- Accountability and governance: Because an identifiable entity is responsible for the design, development, and deployment of AI systems, there must be accountability for those systems throughout their lifecycle. There should be effective oversight of the supply and deployment of AI systems, together with clear regulatory guidance so that appropriate governance procedures support compliance.
- Contestability and redress: Finally, the White Paper seeks to provide avenues for individuals to contest algorithmic outcomes and seek redress where harm occurs. This would not only build public trust in such systems but also serve as an oversight mechanism encouraging responsible AI development.
The White Paper also presents a multi-regulator sandbox, building on recent Budget announcements, to remove cross-regulator obstacles; focuses regulation on the use of AI rather than the technology itself, in line with the EU and US; and establishes a central government function to oversee existing regulatory approaches to AI.
In the coming months, the Department plans to collaborate with regulators to provide practical guidance on how organisations can effectively put these principles into practice. This guidance is intended to help businesses build trust and innovate with assurance. To learn more about how to deploy responsible AI, check out our five best practices.