Earlier this week, a joint publication by the Department for Digital, Culture, Media & Sport, Department for Business, Energy & Industrial Strategy, and Office for Artificial Intelligence proposed the establishment of a pro-innovation framework for regulating artificial intelligence (AI) in the UK.
Under this framework, AI regulation in the UK will be context-specific, based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the relevant regulator(s). The Government will broadly define AI to give regulators some direction – adopting key principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability – but will ultimately allow regulators to define AI according to the domains or sectors in which it is used. This contrasts with the approaches to regulating AI proposed in the EU and US, which seek to govern AI centrally and place greater emphasis on the impact of a system than on its use. The UK government asserts that its context-driven approach provides more opportunities for innovation.
Indeed, a pro-innovation approach is one of the framework's key themes. Taking the example of an AI-first start-up that offers automated customer-facing processes and plans to expand into multiple regulated sectors, the framework envisions that innovation will not be hindered by the costs of ambiguous compliance guidelines: regulators will coordinate to communicate their expectations to businesses clearly and provide guidance highlighting the relevant requirements.
Building on this, another key feature of the framework is coherence: the aim is to create cross-sectoral principles that regulators can interpret and enforce within the context in which AI is used in each sector, with the prioritisation of these principles left to each regulator's discretion. This is intended to give regulators the flexibility to enforce the regulation in a way that is appropriate to the use of AI within their sector. Although differing priorities across sectors could be read as a lack of coherence, the government proposes to maximise coherence through an easy-to-navigate framework, sufficient support for cross-sector coordination, and overarching principles that apply to all sectors. The government has also committed to exploring whether existing infrastructure can support this coordination, or whether additional mechanisms are needed.
The framework also calls for an evidence-based approach in which regulators focus on high-risk applications rather than on hypothetical risks or low-risk applications. The government argues that this avoids introducing unnecessary barriers to innovation, but it also leaves room for loopholes: a user could develop a novel application for which there is little evidence of harm precisely because of its novelty.
Finally, the framework intends regulation to be both proportionate and adaptable, recommending that regulators first set out voluntary measures or issue guidance before implementing compulsory ones. This echoes the spirit of the GDPR, which allowed a two-year transition period before enforcement began. However, since regulators will only be recommended to do this, some sectors could end up with more stringent rules sooner than others, which could compromise the vision of cross-sector coherence.
The framework welcomes views on the proposal from stakeholders in business, civil society, academia and wider sectors by 26th September 2022, ahead of the publication of a white paper towards the end of the year, which will present more granular details about the framework along with implementation plans.