
Holistic AI’s Response to the House of Lords Communication and Digital Committee’s Call for Evidence on Large Language Models

Published on Sep 7, 2023

Holistic AI has submitted its response to the House of Lords Communication and Digital Committee's Call for Evidence on Large Language Models (LLMs).

The Committee – a cross-party group of 13 members of the House of Lords, the UK Parliament's second chamber – invited written evidence to inform its inquiry into LLMs on 7 July 2023.

This followed the UK Government's publication of its White Paper on AI regulation in March 2023.

Broadly, the inquiry focused on four areas of investigation:

  1. How will LLMs develop over the next three years? What are the greatest opportunities and risks?
  2. How adequately does the AI White Paper (alongside other Government policy) deal with LLMs? Is a tailored regulatory approach needed?
  3. Do the UK’s regulators have sufficient expertise and resources to respond to LLMs? If not, what should be done to address this?
  4. How does the UK’s approach compare with that of other jurisdictions, notably the EU, US and China? Will there need to be international coordination in the regulation of AI?

Key takeaways from our submission

Holistic AI firmly believes that a surge in the number of LLM applications over the next three years will be accompanied by increasing complexity in the AI value chain, as well as more frequent risks, hazards, and harms.

Examples of specific risks include adversarial attacks, biased outputs, privacy breaches, opaque decisions, and underperformance – and these risks fall into five categories: robustness, bias, privacy, explainability, and efficacy.

The UK's AI White Paper is, in Holistic AI's view, effective in terms of fostering innovation and addressing risks, but regulatory measures should be supplemented by non-regulatory tools. A tailored approach should prioritise AI risk management, assurance, and governance, with four-stage algorithmic audits (triage-assessment-mitigation-assurance) playing a central role.

In terms of the international context, Holistic AI believes the UK's approach to regulation is relatively unrestrictive compared with its counterparts in Europe, America, and China, giving the nation an advantage in harnessing the power of AI safely, responsibly, and ethically. However, there is no one 'right' regulatory stance. Global alignment in regulatory taxonomy, governance tools, and measurement is, therefore, imperative.

The UK is adopting a lighter-touch approach to AI regulation. Nevertheless, UK organisations using AI in their business will, like their counterparts worldwide, find themselves having to adapt to shifting regulatory conditions as the use of LLMs and AI more generally continues to proliferate.

Holistic AI specialises in AI policy and can help steer your organisation through these evolving conditions. Schedule a call with one of our governance, risk, and compliance experts to find out more.

Download our comments here

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
