UPDATE: This article was first published in July 2022. The UK Department for Science, Innovation and Technology has since published a policy paper, in March 2023, reinforcing the commitments to the pro-innovation approach proposed in July 2022. The consultation period on the white paper concluded on 21 June 2023, and the department is now analysing the feedback it received.
Earlier this week, a joint publication by the Department for Digital, Culture, Media & Sport, the Department for Business, Energy & Industrial Strategy, and the Office for Artificial Intelligence proposed the establishment of a pro-innovation framework for regulating artificial intelligence (AI) in the UK.
Under this framework, AI regulation in the UK will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the relevant regulator(s). The Government will broadly define AI to give regulators some direction – adopting key principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability – but will ultimately allow regulators to define AI according to the domains or sectors in which it is used. This contrasts with the approaches to regulating AI proposed in the EU and US, which seek to govern AI at a central level and place greater emphasis on the impact of the system than on its use. The UK government asserts that its context-driven approach provides more opportunities for innovation.
Indeed, one of the key themes of the framework is a pro-innovation approach. Giving the example of an AI-first start-up that offers automated customer-facing processes and plans to expand into multiple regulated sectors, the framework envisions that innovation will not be hindered by the costs of ambiguous compliance guidelines: regulators will coordinate to communicate their expectations clearly to businesses and provide guidance highlighting the relevant requirements.
Building on this, another key feature of the framework is coherence. The aim is to create cross-sectoral principles that regulators can interpret and enforce within the context in which AI is used in each sector, with the prioritisation of these principles left to each regulator's discretion. This approach is intended to give regulators flexibility to enforce the regulation in a way that is appropriate to the use of AI within their sector. While this could be read as lacking coherence if each sector sets different priorities, the government proposes that coherence will be maximised through an easy-to-navigate framework, sufficient support for cross-sector coordination, and overarching principles that apply to all sectors. The government has also committed to exploring whether existing infrastructure can support this coordination, or whether additional mechanisms are needed.
The framework also calls for an evidence-based approach, with regulators focusing on high-risk applications rather than hypothetical risks or low-risk applications. The government argues that this avoids introducing unnecessary barriers to innovation, although it could allow some developers to exploit loopholes in the regulation: a new application may carry little evidence of harm simply because of its novelty.
Finally, the framework intends to create regulation that is both proportionate and adaptable, recommending that regulators first set out voluntary measures or issue guidance before implementing compulsory measures. This echoes the approach of the GDPR, which allowed a two-year transition period between its adoption and enforcement. However, since regulators will only be recommended to do this, some sectors could end up with more stringent rules sooner than others, which could compromise the vision of cross-sector coherence.
The framework welcomes views on the proposal from stakeholders in business, civil society, academia and wider sectors by 26 September 2022, ahead of the publication of a white paper towards the end of the year. The white paper will present more granular details about the framework, along with implementation plans.
The UK has not yet introduced any regulations specific to AI, but it is in the process of adopting a pro-innovation approach to the technology. Two policy papers have been published, one in July 2022 and one in March 2023, outlining an approach that aims not to stifle businesses.
After the UK government published its second pro-innovation policy paper in March 2023, the Department for Science, Innovation and Technology opened a consultation period inviting feedback on the proposal. That period ended on 21 June 2023.
Some commentators have saluted the UK's approach for its business-centred ethos, its evidence-based principles, and its adaptability. On the flip side, however, some have argued that the UK's approach – which emphasises voluntary standards – is not strict enough.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.