Regulation of artificial intelligence (AI) is emerging around the globe, particularly in the US and EU, where laws have been proposed and adopted to manage the risks that AI can pose. However, the UK government is yet to propose any AI-specific regulation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. This blog post summarises these key publications and their main contributions to the AI regulatory ecosystem.
The Information Commissioner’s Office (ICO) was among the first to issue recommendations on the use of AI with the publication of its draft guidance on the AI auditing framework on 14 February 2020. The draft, which was open for consultation, aimed to provide a methodology for auditing AI applications, aimed at both those responsible for compliance (data protection officers, risk managers, ICO auditors, etc.) and technology specialists. The auditing framework frames risks in terms of their impact on rights and freedoms and offers strategies to mitigate them.
Non-technical responsibilities centre on structuring governance measures: ensuring that there are appropriate documentation and record-keeping practices and that there is accountability for the system and its outputs. There may also be trade-offs between desirable outcomes, such as privacy vs statistical accuracy, statistical accuracy vs discrimination, explainability vs statistical accuracy, and privacy vs explainability.
Technical responsibilities, on the other hand, concern the ‘controller’, who makes decisions about the collection and use of personal data, the target output of the model, feature selection, the type of algorithm to use, model parameters, evaluation metrics, and how models will be tested and updated. Risk mitigation strategies can be applied at these key decision points, for example, using data minimisation and privacy-preserving techniques, ensuring that representative, high-quality data is used, and applying post-processing modifications.
Following its Auditing Framework, the ICO published guidance on 20 May 2020 on explaining decisions made with AI, in collaboration with the Alan Turing Institute. The guidance provides enterprises with a framework for selecting the appropriate explainability strategy based on the specific use case and sector, choosing an appropriately explainable model, and using tools to extract explanations from less interpretable models.
The guidance also contains checklists to support organisations on their journey towards making AI more explainable, which are divided into five tasks:
Following up with a third publication, the ICO issued guidance on AI and data protection on 30 July 2020, aimed at both those focused on compliance and technology specialists. The significant contribution of this publication was an AI Toolkit, which offers a way to assess the effects an AI system might have on the fundamental rights and freedoms of individuals, from the initial design of a system through to deployment and monitoring.
The privacy and data protection risks posed by an AI system are ranked from low risk to high risk before and after any action is taken, termed inherent and residual risk, respectively. The tool also offers suggestions for controls that can be implemented to reduce risk and practical steps that can be taken, along with a bank of ICO guidance.
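As a rough illustration of the inherent-versus-residual distinction, a risk register might record each risk's level before and after controls are applied. This is a hypothetical sketch, not the ICO toolkit's actual scoring logic; the risk names, levels, and controls below are illustrative only.

```python
# Hypothetical sketch of an inherent vs residual risk register.
# Risk names, levels, and controls are illustrative, not from the ICO toolkit.
from dataclasses import dataclass, field

LEVELS = ["low", "medium", "high"]  # ordered from least to most severe

@dataclass
class Risk:
    name: str
    inherent: str                    # risk level before any action is taken
    controls: list = field(default_factory=list)  # mitigations applied
    residual: str = "high"           # risk level after controls are in place

register = [
    Risk("re-identification of individuals", "high",
         ["data minimisation", "pseudonymisation"], "medium"),
    Risk("discriminatory model outputs", "high",
         ["representative training data", "bias testing"], "low"),
]

for r in register:
    reduced = LEVELS.index(r.residual) < LEVELS.index(r.inherent)
    print(f"{r.name}: {r.inherent} -> {r.residual} "
          f"({'reduced' if reduced else 'not reduced'} by {len(r.controls)} controls)")
```

The point of the two columns is simply that the same risk is assessed twice, so that the effect of the chosen controls is made explicit.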
Following these early efforts, the Department for Digital, Culture, Media and Sport (DCMS) published a National Data Strategy on 9 December 2020 to outline best practices for the use of personal data, both within the government and beyond, based on four core pillars:
Targeting AI specifically, the Office for Artificial Intelligence published a National AI Strategy jointly with the DCMS and the Department for Business, Energy & Industrial Strategy on 22 September 2021. The strategy outlines how the UK government aims to invest in and plan for the long-term requirements of the national AI ecosystem, support the adoption of and innovation in AI across the UK's sectors and regions, and ensure that there is appropriate national and international governance to support innovation and investment while adequately protecting the public from harm.
The strategy can be read as a vision for innovation and opportunity, underpinned by a framework of trust. Key takeaways are:
Although the National Data Strategy was not explicitly targeted towards AI, the Central Digital and Data Office expanded on it by releasing the Algorithmic Transparency Standard on 29 November 2021. The standard aims to support public sector organisations in being more transparent about the algorithmic tools they are using, how those tools support decisions, and why they are being used, and provides a template for documenting this information.
The Standard signals that the UK government is pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. The Standard's two-tier approach encourages transparency across distinct audiences: tier 1 contains non-technical information, while tier 2 contains detailed technical information.
Joining the UK’s publication efforts, the Centre for Data Ethics and Innovation (CDEI) released a roadmap to an effective AI assurance ecosystem on 8 December 2021, which formed part of the ten-year plan set forth by the National AI Strategy. The roadmap outlines the CDEI’s vision of what a mature AI assurance ecosystem would look like, including the introduction of new legislation, AI-related education and accreditation, and the creation of a professional service for the management and implementation of trustworthy AI systems to benefit the UK economy. Echoing the sentiment of the ICO’s Auditing Framework, one of the components of the ecosystem is AI auditing, including examining risk, bias, compliance, and performance with a view to certification.
The roadmap also outlines six key activities for the maturation of the ecosystem:
Following the roadmap, the DCMS, the Department for Business, Energy & Industrial Strategy, and the Office for Artificial Intelligence jointly released a policy paper on 18 July 2022 establishing a pro-innovation approach to regulating AI. Under this framework, AI regulation in the UK will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the relevant regulator(s). The government will define AI broadly to give regulators some direction, adopting fundamental principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability. However, it will ultimately allow regulators to define AI according to the relevant domains or sectors.
Four principles underpin the framework:
Following on from the National AI Strategy, the Department for Business, Energy & Industrial Strategy, the DCMS, and the Office for Artificial Intelligence jointly published an AI Action Plan on 18 July 2022. Based on the three pillars of investing in the long-term needs of the AI ecosystem, ensuring AI benefits all sectors and regions, and governing AI effectively, the plan outlines the progress the government has made towards fulfilling the goals of the AI Strategy throughout 2022. These actions include making funding available for postgraduate AI studies, publishing reports and research, and participating in global AI forums.
On 7 February 2023, it was announced that a new Department for Science, Innovation and Technology (DSIT) had been created to support the UK’s efforts to be at the forefront of science and technology innovation. Shortly after, on 29 March 2023, DSIT, along with the Office for Artificial Intelligence, published a white paper on the UK’s pro-innovation approach to AI regulation. The paper highlights the UK government’s aim to take a sector-specific approach involving a number of regulators, guided by five overarching principles:
Intended to be part of an interactive and iterative process, the white paper follows on from the 2022 policy paper and features questions for consultation throughout. The consultation was open from publication until 22 June 2023, allowing various stakeholders to give feedback on the principles proposed in the white paper.
Following the publication of the white paper, the CDEI announced the launch of its Portfolio of AI Assurance Techniques on 7 June 2023. Similar to the OECD’s Catalogue of AI Tools and Metrics, the Portfolio was developed with TechUK and features a range of techniques to support AI assurance, or the evaluation of whether AI systems meet regulatory requirements, relevant standards, ethical guidelines, and organisational values. In particular, the Portfolio captures solutions that can be used across the lifecycle of AI systems, namely impact assessments, impact evaluations, bias audits, compliance audits, certification, conformity assessments, performance testing, and formal verification.
The UK government is yet to take any decisive action towards proposing regulation. While there are signals that it intends to propose regulation in the future, it has moved more slowly than the US and EU. It has, however, made it clear that these initiatives are only the beginning and that, over the next 10 years, it aims to "cement the UK's role as an AI superpower".
Doing this will require cooperation between government departments to move the regulatory agenda forward, as well as consultation with technical experts, investment in infrastructure and education, and a dynamic and adaptable approach.
To learn more about how Holistic AI can help you get ahead of this and adopt AI with greater confidence, get in touch with us at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.