How to Regulate AI in Financial Services: The UK Authorities Set Out Their Stall

The Discussion Paper clarifies how existing regulations and guidance -- including on risk management, consumer protection and data protection -- apply to the use of AI. It seeks feedback on whether new regulations are required.

Key takeaways:

  • UK financial authorities recently published a Discussion Paper on regulating AI and machine learning (ML) in UK financial services.
  • The Discussion Paper emphasises that AI poses novel risks and challenges, whilst also amplifying existing ones.
  • Financial services firms’ risk management processes and systems must be reviewed, adapted and revamped to address the novel risks and challenges of AI use.
  • Firms should take proactive and practical steps to understand and manage their AI risks.

What is the position of the UK Government and regulators?

The debate about how to regulate AI is gathering steam in the UK.

The UK Government, regulators and parliament all agree that AI regulation is a top priority issue, because AI poses novel risks which have the potential to cause significant harm.

The latest intervention in this debate comes from the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA).

On 11 October 2022, they published a Discussion Paper on regulating AI and machine learning (ML) in UK financial services.

This comes shortly after the UK Government announced plans to empower regulators to develop context-specific rules and guidance on AI for their respective sectors.

How is AI used in financial services?

This is a mission critical issue for the UK financial services sector.

72% of financial services firms use or develop AI / ML, and adoption is rapidly increasing across all areas, with the insurance industry experiencing the largest increase.

On the one hand, AI increases operational efficiencies, enables more accurate decision-making, improves fraud detection and facilitates personalisation of products and services. This improves outcomes for consumers, firms and financial markets.

However, AI can amplify bias and discrimination, lead to inaccurate predictions and flawed decision-making, and produce outputs that are hard to interpret and explain, all of which can harm consumers and threaten market integrity and financial stability.

What should financial services firms do next?


The regulators are sending a clear message: firms must review, adapt and revamp their risk management processes and systems in the context of AI. This should be prioritised, given the increasingly widespread and material use of AI.

As many AI risks are novel, firms' approaches to managing them must be adapted to reflect that.

Forward thinking firms should follow these developments closely, and take proactive and practical steps to understand and manage their AI risks.

To learn more about the regulation of AI in financial services, download our Insight Paper below.

