Elements of the NIST AI RMF: What you need to know


Mandated under the National Artificial Intelligence Initiative Act of 2020, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) is a voluntary framework intended as a resource for organizations that design, develop, deploy, or use AI systems. The framework is designed to help organizations manage the risks of AI and promote the trustworthy and responsible development and use of AI systems, while remaining rights-preserving and non-sector-specific.

The NIST AI RMF is operationalized through a combination of five tools or elements – namely the AI RMF Core, Playbook, Roadmap, Crosswalks, and Use-Case Profiles – which establish the characteristics a trustworthy AI system should have, the actions that should be taken to ensure trustworthiness across an AI system’s development and deployment lifecycle, and practical guidance on putting both into practice. In this blog post, we delve into what you need to know about each of these elements of NIST’s AI RMF.

Elements of the NIST AI RMF

Key Takeaways:

  • The AI RMF Core provides the foundation for trustworthy AI systems, with four key functions—Govern, Map, Measure, and Manage—to guide organizations in development and deployment across various domains.
  • The AI RMF Playbook offers actionable guidance for implementing the AI RMF's functions through detailed sub-actions. Organizations can adapt their approach to their needs, leveraging the voluntary suggestions provided.
  • The AI RMF Roadmap outlines NIST's strategy for advancing the AI RMF, focusing on collaboration and key activities to maintain its relevance.
  • The AI RMF Crosswalks are mapping guides that show users how adopting one risk framework can help meet the criteria of another, such as those published by the International Organization for Standardization (ISO).
  • Finally, the AI RMF Use-case profiles provide tailored implementations of the AI RMF's functions and actions, catering to various sectors and use-cases.

NIST AI RMF Core

The AI RMF Core is the foundation of the AI RMF. After providing a comprehensive overview of the characteristics a trustworthy AI system should have, the AI RMF sets out four key functions – Govern, Map, Measure, and Manage – that organizations can adopt to develop and deploy trustworthy AI systems across use-cases and domains. Broadly, these functions seek to ensure that the requisite systems, processes, and tools are developed across organizational contexts to cultivate and sustain a culture of risk management (Govern), that the risk profiles of AI systems are identified and contextualized to the use-cases in which they are deployed (Map), that those risks are effectively assessed, measured, and tracked (Measure), and finally that they are prioritized and addressed proactively (Manage).

Figure: The AI RMF Core
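To make the Core's structure more concrete, the minimal sketch below shows one way an organization might track its coverage of the four functions internally. This is a hypothetical illustration only: the function names come from the AI RMF, but the focus summaries, status values, and Python representation are assumptions, not part of NIST's framework.

```python
# Hypothetical sketch: tracking coverage of the AI RMF Core functions.
# The function names (Govern, Map, Measure, Manage) come from the AI RMF;
# the focus summaries and status values are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class CoreFunction:
    name: str                      # AI RMF Core function name
    focus: str                     # what the function seeks to ensure (paraphrased)
    status: str = "not_started"    # e.g. "not_started", "in_progress", "implemented"


rmf_core = [
    CoreFunction("Govern", "cultivate and sustain a culture of risk management"),
    CoreFunction("Map", "identify and contextualize risks for each use case"),
    CoreFunction("Measure", "assess, measure, and track identified risks"),
    CoreFunction("Manage", "prioritize and address risks proactively"),
]

# Print a simple coverage report.
for fn in rmf_core:
    print(f"{fn.name:<8} | {fn.status:<12} | {fn.focus}")
```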

AI RMF Playbook

The AI RMF Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance for organizations. Built on the AI RMF Core, it facilitates implementation of the four key functions by providing a comprehensive list of sub-actions that fulfill each core function. An organization can leverage the Playbook to systematically establish and implement responsible AI policies, systems, and processes, and to develop accountability structures that create effective internal governance for trustworthy AI development. It can also be used to categorize a system’s risks and benefits (including those from third-party software and data) to map risk profiles, to continuously measure and monitor them, and to manage them by systematically triaging different risks and developing risk management protocols and practices.

Moreover, the Playbook provides voluntary suggestions – a snapshot of which are shown in the figure below – for organizations to leverage according to their specific industry use case or interests, adopting as many or as few as needed.

Figure: The NIST AI RMF Actions Across Use-Cases
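As a concrete illustration of adopting "as many or as few" suggestions as needed, the sketch below filters a small, made-up catalogue of Playbook-style sub-actions by the functions an organization chooses to prioritize. The sub-action descriptions are invented placeholders and are not taken from the Playbook itself.

```python
# Hypothetical sketch: selecting a subset of Playbook-style sub-actions.
# The sub-action descriptions below are invented placeholders, not text
# from the NIST AI RMF Playbook.

playbook_suggestions = {
    "Govern":  ["define accountability roles", "document an organizational AI policy"],
    "Map":     ["inventory AI systems", "record third-party software and data"],
    "Measure": ["define evaluation metrics", "schedule periodic assessments"],
    "Manage":  ["triage risks by severity", "maintain an incident response log"],
}


def select_actions(priorities: list[str]) -> list[str]:
    """Return the sub-actions for the functions an organization prioritizes."""
    return [
        action
        for function in priorities
        for action in playbook_suggestions.get(function, [])
    ]


# Example: an organization starting with governance and risk mapping only.
print(select_actions(["Govern", "Map"]))
```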

AI RMF Roadmap

The AI RMF Roadmap outlines NIST's broader strategy for advancing the AI RMF, focusing on key activities that NIST can undertake in collaboration with private or public entities, or that organizations can pursue independently. It signifies how NIST plans to maintain the AI RMF as a dynamic and relevant resource.

The main priorities identified in the Roadmap include:

  • Aligning with international standards and creating crosswalks to related standards - As the coordinator for federal AI standards, NIST is tasked with working across government and industry stakeholders to track and participate in standards development and activities both domestically and globally. It will specifically focus on ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC DIS 42001, and ISO/IEC NP 42005.
  • Expanding TEVV (Test, Evaluation, Verification, and Validation) efforts - NIST will collaborate with the wider AI community to develop tools, benchmarks, testbeds, and standardized methodologies for evaluating risks in AI and system trustworthiness, including from a socio-technical lens.
  • Developing AI RMF 1.0 Profiles - AI RMF profiles, a type of case study, are the main way for organizations or individuals to share examples of how they’ve used the AI RMF. Profiles can be sector-specific (such as hiring, criminal justice, or lending), cross-sectoral (such as large language models, cloud-based services, or acquisition), temporal (such as current versus desired state), or focused on other topics.
  • Providing guidance on understanding the trade-offs and relationships among trustworthiness characteristics - Considering the many trade-offs that exist when managing trustworthy AI, NIST will investigate key areas in this topic and develop guidance for navigating trade-offs.
  • Establishing methods for measuring the effectiveness of the AI RMF - NIST will collaborate with the AI RMF user community, experts in program evaluation and other parties to develop methods to capture, evaluate, and share insights about the Framework’s application in real life.
  • Creating case studies to illustrate practical applications - Case studies will capture detailed uses of the AI RMF within a single organization or sector, context, or AI actor. These are similar to the AI RMF Profiles but will provide greater depth on organizational challenges and experiences using the AI RMF and how they were addressed, along with information about resources, timeframes, and AI RMF effectiveness.
  • Offering guidance on human factors and human-AI teaming in AI risk management - NIST will investigate how human-AI teaming can be optimized to reduce the likelihood of negative impacts or harms to individuals and wider communities, and will provide guidance based on the results of this research.
  • Providing insights into explainability and interpretability, and their application within the AI RMF - NIST will provide future guidance on how AI explainability and interpretability more directly relate to AI risk management.
  • Developing guidance on setting reasonable risk tolerances - NIST will collaborate with the wider community to identify approaches organizations can use to develop risk tolerances.
  • Producing tutorials and additional resources to promote multi-disciplinary and socio-technical approaches to AI risk management - To expand education and awareness among the broader AI community, NIST will support subject matter experts and other interested parties in developing educational materials tailored to different audience types.

AI RMF Crosswalks

The Crosswalks are mapping guides that show users how adopting one risk framework can help meet the criteria of another. In collaboration with the International Organization for Standardization (ISO), NIST has developed a crosswalk between the AI RMF (1.0) and ISO/IEC FDIS 23894 (Information technology - Artificial intelligence - Guidance on risk management), which provides guidance on how organizations that develop, deploy, or use AI can integrate risk management into their AI-related activities and functions. The agency has also developed crosswalks showing how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the EU AI Act, Executive Order 13960, and the Blueprint for an AI Bill of Rights. These map the AI RMF’s trustworthiness characteristics to the broader principles outlined in the other guidelines.

Organizations or individuals can support NIST in developing additional crosswalks, which will become available in the forthcoming NIST Trustworthy and Responsible AI Resource Center.
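Conceptually, a crosswalk is a mapping from the clauses or characteristics of one framework to related provisions in another. The sketch below shows one way such a mapping could be represented programmatically; the pairings are illustrative placeholders and do not reproduce the content of NIST's published crosswalks.

```python
# Hypothetical sketch: a crosswalk represented as a lookup from AI RMF
# trustworthiness characteristics to related provisions elsewhere.
# The pairings are illustrative placeholders, not NIST's published crosswalks.

crosswalk = {
    "Safe": ["OECD AI Principles: robustness, security and safety"],
    "Accountable and Transparent": ["Blueprint for an AI Bill of Rights: notice and explanation"],
    "Privacy-Enhanced": ["EU AI Act: data governance provisions"],
}


def related_provisions(characteristic: str) -> list[str]:
    """Look up external provisions mapped to an AI RMF trustworthiness characteristic."""
    return crosswalk.get(characteristic, [])


print(related_provisions("Safe"))
```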

AI RMF Use-Case Profiles

Finally, NIST offers tailored implementations of the AI RMF's functions and actions through dedicated Use-Case profiles, catering to various sectors and use-cases. These profiles illustrate how risk can be managed throughout the AI lifecycle or in specific sectors, technologies, or applications.

AI RMF temporal profiles offer guidance on how AI risk management activities should be structured, presenting two main types:

  • Current Profiles - reflect the current state of AI management and its associated risks
  • Target Profiles - outline the desired outcomes for achieving specific AI risk management goals.

By comparing these profiles, organizations can identify gaps in fulfilling the AI RMF's functions and actions. This comparison allows them to prioritize and address these gaps effectively.
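As a simple illustration of this comparison, the sketch below diffs a hypothetical current profile against a target profile to surface gaps. The activity names and statuses are invented for illustration and do not come from the AI RMF.

```python
# Hypothetical sketch: comparing a current profile with a target profile
# to surface gaps. Activity names and statuses are illustrative only.

current_profile = {
    "bias testing": "ad hoc",
    "incident response plan": "missing",
    "model documentation": "implemented",
}

target_profile = {
    "bias testing": "implemented",
    "incident response plan": "implemented",
    "model documentation": "implemented",
}

# Gap = any activity whose current status falls short of the target.
gaps = {
    activity: (current_profile.get(activity, "missing"), desired)
    for activity, desired in target_profile.items()
    if current_profile.get(activity, "missing") != desired
}

for activity, (now, desired) in gaps.items():
    print(f"Gap: '{activity}' is '{now}', target is '{desired}'")
```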

Additionally, the NIST AI RMF offers sector-specific use-case profiles. For example, a hiring profile would outline risk management activities for algorithms used in recruitment, while a fair housing profile would do the same for algorithms used in public housing schemes by government entities.

For applications like language models deployed across sectors, the AI RMF provides cross-sectoral profiles to address their multi-purpose nature.

Get NIST AI RMF ready with Holistic AI

With increasing AI adoption, embedding AI governance solutions to effectively manage AI's risks and enhance its benefits is becoming a pressing necessity. The NIST AI RMF provides a robust yet flexible framework for truly operationalizing trustworthy AI systems across use-cases and domains. At Holistic AI, we provide a scalable and seamless approach to adopting the NIST AI RMF across an enterprise’s AI use-cases. Schedule a demo with our experts to find out how Holistic AI’s Governance Platform can help you adopt the NIST AI RMF and embrace AI with confidence.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
