
Generative AI: A Regulatory Overview

Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI
Published on
May 16, 2023

The future of artificial intelligence (AI) is at an inflection point with the mass adoption of Generative AI. Comprising Large Language Models (LLMs), transformers, and other neural networks, Generative AI and Foundation Models can create new outputs based on raw data. These serve as building blocks for developing more complex and sophisticated models that have the potential to bring exponential benefits across a variety of use cases, from commerce and cancer research to climate change.

In its current form, Generative AI is seeing widespread use across a variety of applications – from chatbots like ChatGPT, to synthetic audio, image, and video generators like DALL-E and Stable Diffusion. However, there is increasing concern that despite in-built safeguards, these models might be misused to proliferate mis/disinformation, create inappropriate content, and harvest huge quantities of personal data without informed consent.

With regulatory interest heightening, governments worldwide have accelerated efforts to understand and govern such models. In this blog, we examine some of these emerging regulatory approaches.

Key takeaways

  1. The European Union (EU) is seeking to establish comprehensive regulatory governance through the AI Act by introducing a tier-based approach for Foundation Models and Generative AI, with stricter transparency obligations for the latter.
  2. Further, the EU is keen to establish a multi-pronged governance regime for these models and has included Generative AI in its recent draft rules on auditing algorithms under the Digital Services Act (DSA).
  3. The United States is exploring "earned trust" in AI systems and is seeking to understand modalities in algorithmic audits, to ensure the responsible innovation of AI systems across industries.
  4. While there are no federal bills seeking to regulate Generative AI in the US, Massachusetts is the only state to have introduced legislation on Generative AI (S.31), which mandates privacy and algorithmic transparency standards for companies developing such models.
  5. China has issued draft rules to regulate Generative AI providers, requiring compliance with measures on data governance, bias mitigation, transparency, and content moderation in line with Chinese societal values, as well as mandating security assessments before releasing Generative AI services to the public.
  6. India and the UK have taken a light-touch approach to regulating Generative AI, with the former ruling out the need for AI-specific legislation, and the latter conducting an initial review of Foundation Models to ensure market competitiveness and consumer protection.

Generative AI regulation in the European Union

Seeking to establish the world’s first comprehensive regulatory playbook on artificial intelligence with the EU AI Act, the European Union is actively integrating foundation models and generative AI into its legislation. In the Act's latest compromise text, which was jointly adopted by the European Parliament's Internal Market and Civil Liberties Committees on 11 May, lawmakers agreed to a tier-based approach for foundation models and generative AI. (Update: The European Parliament passed the draft Act on 14 June 2023)

Differentiating Foundation Models from broader General Purpose AIs, Members of the European Parliament (MEPs) introduced a new section (Article 28b) to the adopted text to specifically govern the former. This section directs providers of Foundation Models to integrate design, testing, data governance, cybersecurity, performance, and risk mitigation safeguards in their products before placing them on the market, such that foreseeable risks to health, safety, human rights, and democracy are mitigated. Further, the text mandates providers of such models to comply with European environmental standards and register their applications in a database which will be managed by the European Commission.

Also covered under the same section, Generative AI services will be subjected to stricter transparency obligations, with providers of such applications required to inform users when a piece of content is machine-generated, deploy adequate training and design safeguards, and publicly disclose a summary of copyrighted materials used to develop their models.

In addition to the EU AI Act, other efforts are being undertaken to regulate Generative AI applications. Data protection and privacy risks, for instance, are being addressed through the GDPR. In April 2023, Italy's Data Protection Authority (DPA), the Garante, restricted the use of ChatGPT in the country, citing the lack of a legal basis for its data collection and ordering its maker, OpenAI, to implement privacy and age-verification safeguards. While OpenAI complied expeditiously and the ban was subsequently lifted, the episode prompted parallel investigations from DPAs in Germany, France, and Ireland over the LLM's data collection practices.

Finally, it is important to mention the EU's mainstay legislation on problematic content – the Digital Services Act (DSA). Although it contains provisions to audit algorithms used in content moderation, the regulation's current iteration does not yet cover Generative AI, creating what academics call a 'dangerous regulatory gap'. However, the European Commission appears to have taken note of this discrepancy by including generative models in its recent draft rules on auditing algorithms under the DSA. Ultimately, it will be interesting to see whether these regulations complement each other to create a comprehensive governance regime that can keep pace with the rapid evolution of Generative AI.

Regulation of Generative AI in the United States

On 11 April 2023, the United States’ National Telecommunications and Information Administration (NTIA) issued a request for public comments on creating ‘earned trust’ in AI systems. The NTIA aims to understand what kind of data is needed to conduct algorithmic audits and how regulators can ensure the responsible and ethical innovation of AI systems across all industries. This is in line with other regulatory efforts, such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, recent industry commitments to participate in public assessments of Generative AI systems, and the Biden Administration's Blueprint for an AI Bill of Rights.

While there is currently no specific endeavour to govern Generative AI tools at the federal level, it is worth noting that US lawmakers have introduced several bills covering a number of aspects of automated systems. Among these, the Algorithmic Accountability Act leads the pack and, if promulgated, would require covered entities to conduct annual impact assessments and audits overseen by the Federal Trade Commission. At present, Massachusetts stands out as the sole state to have introduced a bill aimed at regulating Generative AI. Bill S.31 was, in fact, drafted using ChatGPT, and provides Operating Standards on privacy and algorithmic transparency that companies developing Generative AI models must adhere to.

China, India, and the UK’s Generative AI Regulations

On the same day as the NTIA’s announcement, China’s Cyberspace Administration (CAC) issued draft rules to regulate Generative AI providers. The Administrative Measures for Generative Artificial Intelligence Services seek to govern Generative AI applications spanning text, audio, video, and coding capabilities. Open for public comment until 10 May 2023, the draft covers measures on data governance, quality of training data, bias mitigation, and transparency that providers of Generative AI applications must adhere to. The draft specifically targets the issue of content moderation, requiring concerned companies to ensure that synthetic content is in line with Chinese societal values, free of misleading information, and does not infringe intellectual property. Further, it directs providers to prevent discrimination in training data, deploy proactive filtering measures to remove inappropriate content, and employ labelling techniques to distinguish synthetic media in accordance with China’s Deep Synthesis Provisions.

The draft also mandates all providers to perform a security assessment before releasing a Generative AI service to the public. In line with the CAC’s Assessment Provisions, this can be conducted independently or through a third-party entity and needs to meet conformity requirements established by the CAC on protection of personal information, identity verification, and algorithmic transparency. Finally, providers are required to take measures to prevent their users from being profiled or becoming addicted to synthetic content, and to provide channels for swift grievance redressal.

Wary of clamping down on innovation, India and the UK have taken a different approach to governing Generative AI, asserting the need for a light-touch regulatory regime. While India has currently ruled out the need for AI-specific legislation, regulatory efforts in the UK appear to have gathered steam, with the Competition and Markets Authority (CMA) announcing an initial review of Foundation Models that will help the antitrust regulator understand the AI market and determine what principles need to be adopted to ensure market competitiveness and consumer protection.

Regulations are coming. Get ready

With new regulatory paradigms emerging, it is crucial to prioritize the development of AI systems that embed ethical principles such as fairness and harm mitigation right from the outset. At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations. Using an interdisciplinary approach that combines expertise from computer science, law, policy, ethics, and social science, we take a comprehensive view of AI governance, risk, and compliance, ensuring that we understand both the technology and the context it is used in.

To find out more about how Holistic AI can help you, schedule a demo with us.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
