
G7 Countries Release International Guiding Principles and a Code of Conduct on Governing Advanced AI Systems

Authored by Siddhant Chatterjee, Public Policy Strategist at Holistic AI
Published on Nov 2, 2023

On October 30, 2023, the G7 nations — Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States — unveiled International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI Developers.

Building on the progress made by relevant ministers on the Hiroshima AI Process, including the G7 Digital & Tech Ministerial Declaration issued in September, the Guiding Principles and Code of Conduct will complement emerging national regulations to foster a fit-for-purpose global governance charter on AI.

This development is particularly timely given other AI governance milestones this week – the Bletchley Declaration at the UK’s AI Safety Summit, the Biden-Harris Administration’s Executive Order on AI, and the establishment of the United Nations’ High-Level Advisory Body on AI. These initiatives highlight the urgency among policymakers worldwide to chart regulatory pathways to govern AI responsibly.

The Hiroshima Principles: 11 Action Points

Expanding upon the established OECD Principles on AI, the G7 Guiding Principles present 11 actionable guidelines for organisations developing advanced foundation models. These guidelines are not exhaustive and will be reviewed and updated regularly as needed, through ongoing and inclusive consultations with multiple stakeholders, to ensure that the code remains pertinent and adaptable to the swiftly evolving landscape of this technology.

Based on these guidelines, an International Code of Conduct has been developed, under which organisations developing advanced AI systems should seek to:

  1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
  3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability.
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.
  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies and mitigation measures.
  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content (see the illustrative sketch after this list).
  8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
  9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
  10. Advance the development of and, where appropriate, adoption of international technical standards, and
  11. Implement appropriate data input measures and protections for personal data and intellectual property.
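
Principle 7 above refers to content authentication and provenance mechanisms such as watermarking. As a loose illustration only – not a mechanism prescribed by the G7 or used by any particular provider – the sketch below tags generated text with an HMAC-based provenance signature that a downstream consumer can verify. The key, field names, and functions are hypothetical; real deployments would more likely rely on standards such as C2PA metadata or model-level watermarking.

```python
# Illustrative sketch: attach a verifiable provenance record to AI-generated
# content using an HMAC signature. Key, field names, and functions are
# hypothetical and chosen only to demonstrate the idea of content
# authentication; this is not a production watermarking scheme.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-signing-key"  # hypothetical signing key

def tag_content(text: str, model_id: str) -> dict:
    """Attach provenance metadata and an HMAC signature to generated text."""
    record = {"content": text, "model_id": model_id, "generated": True}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(record: dict) -> bool:
    """Check whether the provenance signature matches the record's content."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

tagged = tag_content("An AI-generated summary of the G7 Code of Conduct.", "example-model-v1")
print(verify_content(tagged))  # True unless the record has been altered
```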

AI regulation is gaining momentum

With regulatory paradigms emerging around the world, it is crucial to prioritise the development of AI systems that promote ethical principles such as fairness and harm mitigation from the outset.

At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1,000 risk mitigations.

Our interdisciplinary approach combines expertise from computer science, law, policy, ethics, and social science, allowing us to address AI governance, risk, and compliance comprehensively and to understand both the technology and the context in which it is used.

To find out more about how Holistic AI can help you, schedule a call with one of our experts.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
