Mandated under the National Artificial Intelligence Initiative Act of 2020, the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (AI RMF) is a voluntary framework intended to serve as a resource for organizations that design, develop, deploy, or use AI systems. The framework is intended to help organizations manage the risks of AI and promote the trustworthy and responsible development and use of AI systems, while remaining rights-preserving and non-sector-specific.
The NIST AI RMF is operationalized through a combination of five elements – namely, the AI RMF Core, the Playbook, the Roadmap, Crosswalks, and Use-Case Profiles – which together establish the characteristics a trustworthy AI system should have, the actions that should be taken to ensure trustworthiness across an AI system's development and deployment lifecycle, and practical guidance on how to carry out those actions. In this blog post, we delve into what you need to know about each of these elements of NIST's AI RMF.
Key Takeaways:
The AI RMF Core is the foundation of the AI RMF. After giving a comprehensive overview of the characteristics a trustworthy AI system should have, it sets out four key functions – Govern, Map, Measure, and Manage – that organizations can adopt to develop and deploy trustworthy AI systems across use-cases and domains. Broadly, these functions seek to ensure that the requisite systems, processes, and tools are developed across organizational contexts to cultivate and sustain a culture of risk management (Govern), that the risk profiles of AI systems are identified and contextualized to the use-cases in which they are deployed (Map), that those risks are assessed, measured, and tracked (Measure), and that they are prioritized and addressed proactively (Manage).
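To make the four functions concrete, here is a minimal, purely illustrative sketch of how an organization's internal tooling might record an AI system's risks as they move through Map, Measure, and Manage under a Govern-level policy. This is not part of the AI RMF itself, and all class names, fields, and thresholds are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified illustration of the four AI RMF Core functions;
# not an official NIST schema.

@dataclass
class Risk:
    description: str          # Map: risk identified for a specific use-case
    severity: float = 0.0     # Measure: assessed/tracked score (e.g., 0-1)
    mitigation: str = ""      # Manage: action chosen to address the risk

@dataclass
class AISystemRiskProfile:
    use_case: str
    governance_policy: str    # Govern: organizational policy the system falls under
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, description: str) -> Risk:
        # Map: identify and contextualize a risk for this use-case
        risk = Risk(description)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: float) -> None:
        # Measure: record an assessed severity so the risk can be tracked
        risk.severity = severity

    def manage(self, threshold: float = 0.5) -> list[Risk]:
        # Manage: prioritize risks above a (hypothetical) severity threshold
        return sorted(
            (r for r in self.risks if r.severity >= threshold),
            key=lambda r: r.severity,
            reverse=True,
        )

profile = AISystemRiskProfile("resume screening", "Responsible AI Policy v1")
bias_risk = profile.map_risk("Potential demographic bias in candidate ranking")
profile.measure(bias_risk, severity=0.8)
for risk in profile.manage():
    print(f"Prioritized: {risk.description} (severity {risk.severity})")
```

In practice, organizations implement these functions through policies, processes, and documentation rather than code; the sketch simply shows how the four functions relate to one another.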
The AI RMF Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance for organizations. Built on the AI RMF Core, it facilitates implementation of the four key functions by providing a comprehensive list of sub-actions that fulfill each of them. An organization can leverage the Playbook to systematically establish and implement responsible AI policies, systems, and processes, and to develop accountability structures that create effective internal governance for trustworthy AI development. It can also be used to categorize a system's risks and benefits (including those arising from third-party software and data) to map risk profiles, to continuously measure and monitor those risks, and to manage them by systematically triaging them and developing risk management protocols and practices.
Moreover, the Playbook provides voluntary suggestions – a snapshot of which is shown in the figure below – that organizations can tailor to their specific industry use-case or interests, adopting as many or as few as needed.
The AI RMF Roadmap outlines NIST's broader strategy for advancing the AI RMF, focusing on key activities that NIST can undertake in collaboration with private or public entities, or that organizations can pursue independently. It signals how NIST plans to maintain the AI RMF as a dynamic and relevant resource.
The main priorities identified in the Roadmap include:
The Crosswalks are mapping guides that show users how adopting one risk framework can help them meet the criteria of another. In collaboration with the International Organization for Standardization (ISO), NIST has developed a crosswalk between the AI RMF (1.0) and ISO/IEC FDIS 23894 Information technology – Artificial intelligence – Guidance on risk management, which provides guidance on how organizations that develop, deploy, or use AI can integrate risk management into their AI-related activities and functions. The agency has also developed a crosswalk showing how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the EU AI Act, Executive Order 13960, and the Blueprint for an AI Bill of Rights, mapping the AI RMF's trustworthiness characteristics to the broader principles outlined in those instruments.
Organizations or individuals can support NIST in developing additional crosswalks, which will become available in the forthcoming NIST Trustworthy and Responsible AI Resource Center.
Finally, NIST offers tailored implementations of the AI RMF's functions and actions through dedicated Use-Case profiles, catering to various sectors and use-cases. These profiles illustrate how risk can be managed throughout the AI lifecycle or in specific sectors, technologies, or applications.
AI RMF temporal profiles offer guidance on how AI risk management activities should be structured at different points in time, presenting two main types: current profiles, which describe how AI risks are being managed in an organization today, and target profiles, which describe the desired or goal state of those risk management activities.
By comparing these profiles, organizations can identify gaps in fulfilling the AI RMF's functions and actions, and then prioritize and address those gaps effectively.
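As a minimal, hypothetical sketch of what such a gap analysis might look like, the snippet below compares the actions covered in a current profile against those required by a target profile; the action identifiers are placeholders rather than actual AI RMF subcategories.

```python
# Hypothetical illustration of a current-vs-target profile gap analysis;
# the action identifiers are placeholders, not official AI RMF references.

current_profile = {"GOVERN 1.1", "MAP 1.1", "MEASURE 1.1"}                            # actions already in place
target_profile = {"GOVERN 1.1", "MAP 1.1", "MAP 2.1", "MEASURE 1.1", "MANAGE 1.1"}    # desired state

gaps = target_profile - current_profile   # actions still needed to reach the target
for action in sorted(gaps):
    print(f"Gap to address: {action}")
```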
Additionally, the NIST AI RMF offers sector-specific use-case profiles. For example, a hiring profile would outline risk management activities for algorithms used in recruitment, while a fair housing profile would do the same for algorithms used in public housing schemes by government entities.
For applications like language models deployed across sectors, the AI RMF provides cross-sectoral profiles to address their multi-purpose nature.
With increasing AI adoption, the need to embed AI governance solutions that effectively manage AI's risks and enhance its benefits will become ever more pressing. The NIST AI RMF provides a robust yet flexible framework for operationalizing trustworthy AI systems across use-cases and domains. At Holistic AI, we provide a scalable and seamless approach to adopting the NIST AI RMF across an enterprise's AI use-cases. Schedule a demo with our experts to find out how Holistic AI's Governance Platform can help you adopt the NIST AI RMF and embrace AI with confidence.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.