ISO/IEC 42001:2023 – AI Standard on Establishing, Maintaining and Improving AI Management Systems

December 20, 2023
Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI

On December 18, 2023, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published a new standard, ISO/IEC 42001. The standard is a voluntary guideline designed to help organizations establish, implement, maintain, and continually improve their Artificial Intelligence Management Systems (AIMS).

What you need to know: ISO/IEC 42001 is a voluntary standard that organizations can use as a trust-building tool, whether as a certification or as a general framework for governing AI.

ISO/IEC 42001 is a significant step forward in AI standards, offering a detailed governance framework for trustworthy and responsible AI use.

The standard is flexible: organizations can choose to certify their AIMS under it. It also lays the groundwork for external certification and auditing of AI systems, complementing the forthcoming ISO/IEC 42006 standard on bodies that audit and certify AI management systems. ISO/IEC 42001 is scalable, making it suitable for organizations of all sizes and sectors.

ISO/IEC 42001 is part of a broader suite of ISO/IEC standards that codify best practices for trustworthy AI development, deployment, and improvement.

Potential Downstream Implications

ISO/IEC 42001 is likely to shape emerging regulations, primarily in the European Union and the United States, by supplying guiding principles and mechanisms for future legislation.

What you need to know: Proactively tracking ISO/IEC developments can help your organization prepare for future legal requirements.

For instance, policymakers behind the EU AI Act (which reached provisional agreement on 8 December 2023) may adopt the standard’s objectives and modalities to guide regional standardisation work by bodies such as CEN/CENELEC and ETSI on conformity assessment procedures for High-Risk AI Systems (HRAIS).

Similarly, legislative efforts on AI in the United States, which have intensified following the Biden-Harris Administration’s recent Executive Order, may seek to integrate aspects of ISO/IEC 42001 and NIST’s AI Risk Management Framework (AI RMF) into future regulatory frameworks.

The Importance of AI Standardisation

AI regulations are rapidly gaining momentum in jurisdictions such as the European Union, United Kingdom, and United States. Standards are growing in importance as well, often serving as the underlying principles and guidelines behind formal legislation. For instance, the EU AI Act aims to fulfil its objectives on AI trustworthiness, accountability, risk management, and transparency by adopting harmonised procedural, performance, and measurement standards.

The urgent need for standardisation is further underscored by the lack of harmonisation in the regulatory language used across different regulations. This has translated into a lack of global alignment and consensus on crucial issues such as AI taxonomy, governance mechanisms, assessment methodologies, and measurement.

Clear and universally accepted standards make possible a more coherent and consistent approach to governing AI technologies, mitigating risks, and fostering responsible and ethical AI development and deployment.

Standards with Holistic AI

For organizations using AI in their business, emerging regulations increasingly demand third-party audits and other conformity assessment processes.

Through our proprietary Platform and suite of innovative regulatory compliance solutions, Holistic AI can help you operationalize technical standards at scale, giving you the tools to ensure your AI systems are developed and deployed safely, effectively, and in line with compliance obligations.

We support trustworthy AI and enable teams to build trust with internal and external stakeholders for more widespread and effective AI adoption:

  1. AI Assessments: Through quantitative and qualitative assessments, we ensure the dependability of AI-driven products across five key verticals: efficacy, robustness, privacy, bias, and explainability.
  2. Third-party Risk Management: Customised mitigation and recommendations to manage AI risks.
  3. Compliance Support: Assess compliance against applicable AI regulations and industry standards.

Schedule a call with one of our specialists to find out more about how we can help your organization.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
