The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have made ISO/IEC 22989, an AI standard that defines terminology for various aspects of AI systems, available to the public.
The foundational standard defines more than 110 key concepts in the field of AI, including terms like ‘datasets’, ‘AI agents’, ‘transparency’, and ‘explainability’.
ISO/IEC 22989 also provides conceptual guidance on Natural Language Processing (NLP) and Computer Vision (CV) models. The standard aims to promote the development of a shared vocabulary and framework for essential AI concepts, thereby facilitating dialogue between stakeholders.
As a crucial building block for articulating the different aspects of AI systems, ISO/IEC 22989 is expected to pave the way for technical standards that establish performance baselines, processes and protocols for responsible AI development and deployment, and metrics to gauge model efficacy.
The standard also clarifies that trustworthiness encompasses reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality, and usability.
The standard defines a stakeholder as “any individual, group or organization that can affect, be affected by or perceive itself to be affected by a decision or activity” and provides a comprehensive map of the AI stakeholder ecosystem, clearly delineating the different kinds of players associated with an AI system. ISO/IEC 22989 also notes that a single entity can take on multiple stakeholder roles.
Considering the range of stakeholders that can be invested in a single AI system, the standard highlights the need for multi-stakeholder consultations drawing on diverse subject-matter expertise to better identify the risks of each system and ensure regulatory compliance.
With AI regulations multiplying rapidly in jurisdictions like the European Union, the United Kingdom, and the United States, the need for governance standards is increasingly clear. For instance, the EU AI Act aims to fulfil its objectives regarding AI trustworthiness, accountability, risk management, and transparency by adopting technical and procedural standards.
The urgent need for standardisation is further underscored by the lack of harmonisation in the language used across different regulations. This has translated into a lack of global alignment and consensus on crucial issues like AI taxonomy, governance mechanisms, assessment methodologies, and measurement.
Establishing clear and universally accepted standards would enable a more coherent and consistent approach to governing AI technologies, mitigating risks, and fostering responsible and ethical AI development and deployment.
For organisations using AI in their business, emerging regulations increasingly demand third-party audits and other conformity assessment processes.
Holistic AI are Governance, Risk, and Compliance specialists. Through our proprietary AI Governance Platform and suite of innovative solutions, we can help you operationalise technical standards at scale, giving you the tools to ensure your AI systems are developed and deployed safely, effectively, and in line with compliance obligations.
We assist organisations in demonstrating responsible AI practices to regulators and consumers.
Schedule a call with one of our specialists to find out more about how we can help your organisation.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.