
NIST AI RMF Core: What You Need to Know

Published on Apr 19, 2024

The National Institute of Standards and Technology (NIST) is a non-regulatory agency under the United States Department of Commerce. With an annual budget of $1 billion, NIST conducts measurement science, develops standards, and operates advanced laboratories. While NIST is responsible for developing standards for technology in general, it is increasingly focusing its efforts on AI.

On 26 January 2023, NIST published its AI Risk Management Framework (AI RMF). The publication of the AI RMF 1.0 followed an 18-month consultation process with private and public sector groups, in which 400 formal comments from 240 entities on previous drafts influenced the final framework. The framework was developed per instructions under the National Artificial Intelligence Initiative Act of 2020, which required NIST to develop a voluntary risk management framework as a resource for organizations that design, develop, deploy, or use AI systems. Intended to help organizations manage the risks of AI and promote the trustworthy and responsible development and use of AI systems while remaining rights-preserving and non-sector-specific, the NIST AI RMF is accompanied by the NIST AI RMF Playbook, an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives.

Within the framework itself, the first part lays out the broad principles of AI risk management as NIST defines it, while the second part, the AI RMF Core (AIRC), presents four specific functions that organizations can implement to address the risks of AI systems in practice: Govern, Map, Measure, and Manage. In this blog post, we outline each of these key functions as well as the key themes emphasized throughout the AIRC.

Key Functions of the AI RMF

Govern

Govern is the bedrock function of the AI RMF Core, without which the other functions cannot succeed. It addresses how to cultivate and implement a culture of risk management within organizations designing, developing, deploying, evaluating or acquiring AI systems.

A main challenge for organizations is knowing how to operationalize risk management principles. The Govern function connects these organizational principles with the technical aspects of AI system design and development. NIST recommends that organizations establish processes and policies under the Govern function before applying Map, Measure, and Manage.

To support this, Govern offers specific processes, documents, and organizational schemes and covers areas such as legal compliance, trustworthy AI integration, workforce diversity considerations, engagement with external stakeholders, and addressing risks from third-party software and data.
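
As an illustration only, the short Python sketch below shows one way an organization might track Govern-function policy areas as a simple checklist. The PolicyItem structure, the example items, and the owners are our own assumptions for illustration, not NIST's official subcategories.

from dataclasses import dataclass

@dataclass
class PolicyItem:
    area: str          # e.g. "legal compliance", "third-party risk"
    description: str   # what the policy or process requires
    owner: str         # accountable role within the organization
    in_place: bool = False

# Illustrative items paraphrasing areas the Govern function covers
govern_checklist = [
    PolicyItem("legal compliance", "Map applicable laws and regulations to each AI system", "Legal"),
    PolicyItem("trustworthy AI", "Integrate trustworthiness characteristics into design reviews", "AI lead"),
    PolicyItem("workforce diversity", "Include diverse perspectives in risk decisions", "HR"),
    PolicyItem("external stakeholders", "Maintain feedback channels with impacted communities", "Product"),
    PolicyItem("third-party risk", "Vet third-party software, models, and data sources", "Procurement"),
]

gaps = [p for p in govern_checklist if not p.in_place]
print(f"{len(gaps)} Govern policies still to establish before Map, Measure, and Manage")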

Map

The Map function of the AI RMF Core prompts Framework users to survey the context in which a given AI system operates and identify any potential context-related risks. The lifecycle of an AI system is a complex and iterative process, often revisited many times as the system changes and develops. Because of this, the best intentions in one phase of the lifecycle can lead to unintended consequences in another. For example, the recent controversy over Google Gemini’s historically inaccurate images demonstrates how an AI developer may not have adequately mapped their AI model to a wider context.

The Map function aims to support the navigation of contextual factors relating to AI systems and is especially designed to help framework users identify risks and other broad contextual factors. Given the importance of context, it is critical that Framework users incorporate as many different perspectives on the AI system as possible. The internal team, external collaborators, end users, and any potentially impacted individuals are some of the many perspectives that should be accounted for during this process.

Adapted from the OECD (2022) Framework for the Classification of AI Systems – OECD Digital Economy Papers.

Moreover, outcomes in the Map phase form the basis for the Measure and Manage phases. After completing the Map function, framework users should have the contextual knowledge about a given AI system needed to inform an initial go or no-go decision about whether to design, develop, or deploy it.

If a decision is made to proceed, organizations should utilize the Measure and Manage functions along with policies and procedures put into place from the Govern function to assist in AI risk management efforts.
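
As a rough illustration of how mapped risks might feed that initial decision, the Python sketch below aggregates hypothetical risk entries against an organization-defined tolerance threshold. The risk entries, severity scale, and RISK_TOLERANCE value are all assumptions; the AI RMF itself does not prescribe a scoring formula.

# Hypothetical risks surfaced during the Map function
mapped_risks = [
    {"risk": "historical bias in training data", "severity": 4, "mitigable": True},
    {"risk": "use in an unanticipated deployment context", "severity": 5, "mitigable": False},
    {"risk": "insufficient end-user feedback channels", "severity": 2, "mitigable": True},
]

RISK_TOLERANCE = 4  # organization-defined threshold (assumed for illustration)

# A severe risk with no available mitigation blocks an initial "go"
blocking = [r for r in mapped_risks if r["severity"] >= RISK_TOLERANCE and not r["mitigable"]]
decision = "no-go" if blocking else "go"
print(f"Initial decision: {decision}")
for r in blocking:
    print(f"  blocking risk: {r['risk']}")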

Measure

The Measure function of the AIRC involves quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze and evaluate AI risks and their related impacts. Information from the Map function informs this function, and its results in turn inform the Manage function.

Measurements of AI systems must be documented and formally reported, and include:

  • Documenting aspects of a system’s functionality and trustworthiness
  • Tracking metrics for trustworthy characteristics, social impact, and human-AI configurations
  • Rigorous software testing and performance assessment methodologies

These should include measurements of uncertainty and comparisons to performance benchmarks.
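
To make the uncertainty requirement concrete, here is a minimal Python sketch that reports a performance metric with a bootstrap confidence interval and compares it against a benchmark. The evaluation outcomes and the 0.85 benchmark are invented for illustration; the AI RMF does not mandate any particular statistical method.

import random

random.seed(0)
# 1 = correct prediction, 0 = incorrect (stand-in for real evaluation results)
outcomes = [1] * 88 + [0] * 12

def bootstrap_ci(data, n_resamples=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples)]
    return lo, hi

accuracy = sum(outcomes) / len(outcomes)
lo, hi = bootstrap_ci(outcomes)
BENCHMARK = 0.85  # assumed target from prior systems or industry baselines
print(f"accuracy = {accuracy:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], benchmark = {BENCHMARK}")
print("meets benchmark" if lo >= BENCHMARK else "benchmark not clearly met")

Reporting the interval alongside the point estimate, rather than the accuracy alone, is what makes the measurement auditable when it is later compared against benchmarks or reviewed independently.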

Manage

Once organizations have gathered and measured all necessary information about an AI system, they can respond to the identified risks. For the Manage function, the AIRC offers advice on how users can prioritize these risks and act upon them based on their projected impact. It specifically details how organizations can regularly allocate risk resources (such as any domain expertise needed in the Measure function) to the mapped and measured risks. It also covers communication and incident reporting to affected communities.
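
As one possible illustration of impact-based prioritization, the Python sketch below ranks hypothetical risks by expected impact (likelihood multiplied by impact) to suggest where risk resources might be allocated first. The scores, scales, and risk names are assumptions, not part of the framework.

# Hypothetical risks carried over from Map and Measure
measured_risks = [
    {"risk": "disparate impact in outputs", "likelihood": 0.6, "impact": 5},
    {"risk": "model drift after deployment", "likelihood": 0.8, "impact": 3},
    {"risk": "third-party data license lapse", "likelihood": 0.2, "impact": 4},
]

# Rank by expected impact to decide where to allocate risk resources first
for r in sorted(measured_risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f"{r['likelihood'] * r['impact']:.1f}  {r['risk']}")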

The Manage function is where all prior functions coalesce. The contextual information learned during the Map phase is used in this function to decrease the likelihood of system failures and consequences. Systematic documentation practices identified in Govern, and used in Map and Measure, enhance AI risk management and increase transparency during Manage. As with the other functions, Framework users should continue to apply the Manage function as the AI system, the organization, and their context and needs change over time.

Key Themes of the AI RMF Core

Across the four functions of the AI RMF Core, we identify four key themes: adaptability, accountability, diversity, and iteration.

Adaptability

Intentionally designed to be non-sector-specific and use-case-agnostic, the AI RMF Core recommends that framework users apply the four functions as best suits their needs. The framework is adaptable to organizations of all types and sizes, accommodating the range of resources and capabilities different organizations have. It specifically notes that “functions may be performed in any order across the AI lifecycle as deemed to add value by a user of the framework.”

Accountability

Throughout the AI RMF, NIST emphasizes that trustworthy AI depends on accountability. Whether during the Govern function, which hinges on senior leadership, or the Manage function, which determines how risks are prioritized, accountability is threaded through risk management. The AIRC describes various ways an organization can enable accountability.

The Measure function is particularly critical to ensuring accountability in the governance of AI risk management. The ability to objectively trace and pinpoint the decision that resulted in risk exposure allows an organization to correct and change its system. Measurements should be carried out in an open and transparent process. An independent review of such measurements can also improve the effectiveness of testing, mitigate internal biases, and reduce potential conflicts of interest.

As noted in the Govern function, measuring AI systems can ensure management decisions are rooted in evidence of the efficacy of the system. Accordingly, framework users should follow scientific, legal, and ethical norms when carrying out any measurements.

Diversity

Including a wide range of perspectives and actors across the AI lifecycle is key to identifying and managing risks and their impacts. The AI RMF is designed to be used by any actor involved in an AI system, and NIST notes that doing so will reduce unintended risks. For example, some AI-powered hiring tools are now case studies in how AI providers and users failed to include diverse perspectives from the start.

Given that there have been many cases of automated hiring tools disproportionately favoring one demographic group over others, more diverse perspectives are now involved in the making of these systems. Diversity is particularly important for the Govern function, which states that workforce diversity, equity, inclusion and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

Iteration

Just as powerful large language models (LLMs) need fine-tuning or updating with new training data, so too does risk management. No matter the function, the AI RMF recommends the process always be iterative, with cross-referencing between functions as needed.

Organizations and their personnel are not static, and their risk practices must evolve with changes in their context, needs, or objectives. For example, NIST notes that some methods for measuring risk in AI systems are still developing; once they are verified, organizations should use these updated and better measurement methods to test their systems. Making risk management iterative will allow Framework users to be better prepared for more powerful and sophisticated AI systems.

Prioritize AI Governance

Although voluntary, implementing an AI risk management framework can increase trust and improve your ROI by ensuring your AI systems perform as expected. Holistic AI’s Governance Platform is a 360-degree solution for AI trust, risk, security, and compliance, and can help you get ahead of evolving AI standards. Schedule a demo to find out how we can help you adopt AI with confidence.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
