
The Holistic AI View: Progress as 7 US Juggernauts Commit to Managing AI Risk

Authored by
Monica Lopez
Senior Policy Expert at Holistic AI
Published on
Jul 26, 2023

On Friday 21 July 2023, the Biden-Harris Administration announced that it had secured "Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI."

The agreement encompasses seven leading US AI companies – Amazon, Anthropic, Inflection, Google, Meta, Microsoft, and OpenAI – and comes at a critical moment in the development of the nation's stance on responsible AI governance.

Holistic AI welcomes this significant step forward, which represents a pathway towards ensuring AI-enabled products are safe, secure and trustworthy prior to entering users' hands.

The measures are necessary as, without these guardrails in place, AI technologies pose multiple risks – both in the US and beyond. International cooperation is becoming increasingly important, especially with the European Union's sweeping Artificial Intelligence Act having now entered the final stage of the lawmaking process.

As an AI Governance, Risk and Regulatory Compliance platform with a mission to empower enterprises to adopt and scale AI with confidence, we support this latest initiative in the US and stand ready to participate in the efforts of the administration and these companies moving forward.

Forging a balance between supporting innovation and promoting responsible AI governance across the nation – and indeed the world – is critical for reinforcing principles of safety, security, and trust.

That need is reflected in the White House's statement, which emphasizes the following:

Safety – Ensuring the safety of AI systems prior to deployment

  • "Companies commit to internal and external security testing prior to releasing their system(s).”
  • “Companies commit to sharing information on managing AI risks within their system(s) across industry and with governments, civil society and academia.”

Security – Building AI systems with security as a priority

  • “Companies commit to investing in robust cybersecurity and insider threat safeguards for their system(s).”
  • “Companies commit to third-party discovery and reporting of system vulnerabilities.”

Trust – Earning the public’s trust through transparency of AI systems

  • “Companies commit to developing robust technical mechanisms for their system(s) to ensure users know when content is AI-generated.”
  • “Companies commit to publicly reporting their system(s)’ capabilities and limitations, and the areas of appropriate and inappropriate use.”
  • “Companies commit to developing and deploying their system(s) to address society’s grand challenges and benefit humanity.”

Holistic AI's deep practical expertise and supporting in-house academic research have led us to adopt a number of further positions, which we believe can act as catalysts for fulfilling the terms outlined in the White House's statement.

1. A commitment to robust long-term research partnerships and collaborations between industry, academia and the public sector

Bridging the gap between innovative AI research and practical real-world applications is crucial. Key issues around responsible and ethical AI like mitigating bias, ensuring model validity, protecting data privacy, and enabling transparency need collaborative, consensus-based solutions. Innovations in AI require rigorous study, testing, assessment and standardisation to translate ideas into robust and beneficial implementations.  

2. Dedication to the development of robust human-in-the-loop systems

Understanding an AI system's level of automation is vital, and end users should know when they are interacting with AI. A human-centred approach promotes human well-being by building knowledge and providing information. This requires transparency from all stakeholders on how the system operates and how its outputs should be interpreted, including clear opt-out mechanisms.

3. A commitment to convening and working with diverse stakeholders, particularly those outside large and already well-established technology leaders

The ubiquity of AI-enabled systems and the extent of their impact on users are too large to ignore. Small and medium-sized enterprises (SMEs) offer unique value through their direct, frontline product-to-consumer relationships and their day-to-day management of customer understanding and experience. No voice is too small. Active engagement with SMEs can surface and address the concerns of affected parties.

4. A pledge to incentivise the adoption of risk management protocols

Identifying and evaluating a system’s risks and safety issues is now an imperative throughout the research and development process. Responsibly validating system behaviour through continuous documentation and rigorous testing at each stage can help mitigate unintended consequences before deployment.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
