Foundation models and generative AI occupy centre stage in the public discourse around artificial intelligence today. Touted to usher in a new era of computing, they enable instant automation, access to information, pattern identification, and the generation of images, audio and video. They serve as building blocks for developing sophisticated single-purpose models that have the potential to bring exponential benefits across a variety of use cases, from content creation and commerce to cancer research and climate change.
However, their unchecked proliferation may also bring risks, harms and hazards – such as the seamless generation of mis/disinformation, the creation of dangerous content, copyright infringement, hallucinatory outputs, biased results and the harvesting of large quantities of personal data without informed consent. These harms could extend beyond the digital environment, with growing concerns over such models replacing human labour and the large carbon footprint associated with their development and deployment.
Indeed, concerns over these negative consequences have been voiced by a range of stakeholders spanning civil society, academia and industry. Scholarly research increasingly highlights the potential harms caused by biased outputs, while global coalitions are raising alarms about AI's potential to contribute to human extinction and advocating for a moratorium on the development of such technologies.
With the growing imperative to regulate foundation models, policymakers around the globe are embracing a range of strategies. Leading this charge is the European Union, through the implementation of the EU AI Act. This legislation has recently been revised to include provisions specifically addressing the use of foundation models within the EU single market. Given the wide-reaching influence of the EU AI Act due to the ‘Brussels Effect’, this blog explores the EU's approach to regulating foundation models and generative AI.
Key takeaways:
- The European Commission's original proposal for the EU AI Act did not cover foundation models; the European Parliament's text of 14 June 2023 introduced Article 28 b to address them directly.
- Article 28 b places a set of nine ex-ante obligations on providers of foundation models to ensure they are safe, secure, ethical and transparent, covering models distributed through both open-source and licensing channels.
- Providers that offer foundation models as a service, for example via API access, must cooperate with downstream operators on regulatory compliance throughout the system’s lifecycle.
Initial versions of the EU AI Act proposed by the European Commission did not include obligations on foundation models, partly due to the novelty of the technology and relatively limited awareness of it. More importantly, the existing structure of the EU AI Act – focused on regulating specific use-cases of technology – was ill-suited to foundation models, which can be flexibly deployed across diverse contexts.
As domain experts have pointed out, limiting these models to specific use-cases that are High-Risk (Annex III) or Prohibited (Article 5) would have been too static an approach, leaving the legislation riddled with limitations and discrepancies even before its enforcement. Such concerns, coupled with the rising popularity of foundation models like ChatGPT and Bard and growing public discourse around their many use-cases, implications and potential risks, prompted the EU (particularly the Parliament) to draft rules explicitly covering these models.
On 14 June 2023, Members of the European Parliament (MEPs) passed the latest version of the EU AI Act, introducing a new section, Article 28 b, to govern foundation models and generative AI. Currently progressing through the trilogue stage between the EU Commission, Parliament and Council, the legislation now mandates a set of nine ex-ante obligations on providers of foundation models to ensure they are safe, secure, ethical and transparent. Significantly, the EU AI Act is cognisant of the many use-cases for which these models can be adapted: it targets players across the AI value chain, covering models that can be made available through open-source and licensing channels and used in a range of downstream applications.
Recital 60 e, which was also added in the latest version of the Act’s text, defines foundation models as:
“AI models are developed from algorithms designed to optimize for generality and versatility of output. (These) models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained.”
The legislation further clarifies this definition in Recital 60 g, stating that pre-trained models designed for a "narrower, less general, more limited set of applications” should not be considered foundation models due to their greater interpretability and predictability.
The EU AI Act also defines Generative AI in Article 28 b (4) as:
“AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video.”
Acknowledging the many complexities and uncertainties pervading the foundation model ecosystem – such as the need to clarify the roles of actors across the AI value chain, the lack of expertise in conducting conformity assessments for these models, and the absence of standardised third-party audit and assurance mechanisms – the Commission and the proposed AI Office have been tasked with periodically monitoring and assessing the legislative and governance framework around these systems.
Under Article 28 b, providers of foundation models are required to ensure compliance before placing their product on the EU market, through obligations spanning risk identification and mitigation, data governance, appropriate levels of performance and cybersecurity, energy efficiency, technical documentation, quality management and registration in the EU database, with additional transparency requirements for generative AI.
Further, providers of foundation models are expected to cooperate with downstream operators on regulatory compliance throughout the system’s lifecycle, if the model in question has been provided as a service through Application Programming Interface (API) access. However, if the provider fully transfers the training model along with detailed information on datasets and the development process, or restricts API access, downstream operators are expected to comply with the regulation without further support (Recital 60 f).
The EU AI Act is one of many emerging endeavours to govern foundation models and generative AI. Concerted regulatory momentum to legislate these technologies is increasing across the world – and companies desirous of developing and deploying such models must proactively ensure they fulfil the increasing list of compliance obligations.
Holistic AI takes a comprehensive, interdisciplinary approach to responsible AI. We combine technical expertise with ethical analysis to assess systems from multiple angles. Evaluating AI in context, we identify key issues early, considering both technological factors and real-world impact to advance the safe and responsible development and use of AI. To find out more about how Holistic AI can help you, schedule a call with our expert team.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.