The Artificial Intelligence Risk Management Framework (AI RMF) Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance for organizations. In this blog post, we’ll give an in-depth overview of the Playbook, including the first steps organizations can take to implement it. For an introductory piece on the AI RMF, check out the Core Elements of the NIST AI Risk Management Framework.
As with the AI RMF Core, the Playbook first addresses the Govern function, which broadly recommends that a “culture of risk management is cultivated and present” as the first step towards successful AI risk management. It is the bedrock function of the AI RMF Core, without which the other functions cannot succeed, and therefore actions related to it should come before Map, Measure, and Manage.
The Govern function generally offers two types of recommendations: those that address broad organizational practices and cultural norms, and those targeted specifically at the AI system. Each subfunction within Govern is accompanied by a set of suggested actions, along with recommended transparency and documentation practices. For instance, subfunction Govern 1.6, which relates to the AI system, can be broken down as follows:
An AI system inventory should include system documentation, incident response plans, data dictionaries, links to implementation software or source code, and names and contact information for relevant AI actors. Maintaining an inventory is an important practice because it provides a holistic view of an organization's AI assets. AI inventories can also quickly answer important questions, such as when a given model was last updated.
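To make this more concrete, an inventory entry can be captured as a simple structured record. The sketch below is a minimal, hypothetical Python example; the field names and example values are illustrative and not prescribed by NIST:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemInventoryEntry:
    """Hypothetical, minimal record for one AI system in an organizational inventory."""
    system_name: str
    owner_contact: str            # responsible AI actor and contact details
    documentation_url: str        # link to system documentation / model card
    source_code_url: str          # link to implementation software or repository
    incident_response_plan: str   # link or path to the incident response plan
    data_dictionary: str          # link or path to the data dictionary
    last_updated: date = field(default_factory=date.today)

# Example usage: the inventory can quickly answer "when was this model last updated?"
inventory = [
    AISystemInventoryEntry(
        system_name="resume-screening-model",
        owner_contact="ml-team@example.com",
        documentation_url="https://example.com/docs/resume-screener",
        source_code_url="https://example.com/git/resume-screener",
        incident_response_plan="https://example.com/irp/resume-screener",
        data_dictionary="https://example.com/data/resume-screener",
        last_updated=date(2023, 5, 1),
    )
]
print(max(entry.last_updated for entry in inventory))
```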
Other subfunctions within the Govern section address broad personnel practices within an organization. For example, Govern 4.1 recommends:
NIST emphasizes that a culture of risk management is critical to effectively triaging most AI-related risks. In some industries, organizations implement three or more 'lines of defense,' in which separate teams are held accountable for different aspects of the system lifecycle, such as development, risk management, and auditing. This approach may be more difficult for smaller organizations, which can instead implement an 'effective challenge': a culture-based practice that encourages critical thinking and questioning of important design and implementation decisions by experts with the authority and stature to make such changes.
NIST also recommends red teaming as another approach, which consists of adversarial testing of AI systems under stress conditions to seek out failure modes or vulnerabilities. Red teams typically draw on external expertise or personnel who are independent of the internal development team.
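Red-team exercises can include programmatic stress tests alongside manual probing. The snippet below is an illustrative sketch, not something prescribed by the Playbook: it trains a toy scikit-learn classifier as a stand-in for the system under test and measures how its accuracy degrades as input noise increases, a simple way to surface failure modes under stress conditions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a toy classifier to stand in for the AI system under test.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Stress condition: inject increasing amounts of input noise and watch for degradation.
for noise_scale in [0.0, 0.5, 1.0, 2.0]:
    perturbed = X_test + np.random.default_rng(0).normal(0, noise_scale, X_test.shape)
    accuracy = model.score(perturbed, y_test)
    print(f"noise={noise_scale:.1f}  accuracy={accuracy:.3f}")
```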
The Map function instructs AI RMF users to survey the context in which a given AI system is working and identify any potential context-related risks.
The Map function is geared towards aiding users in navigating contextual factors associated with AI systems, specifically enabling them to pinpoint risks and broader contextual elements. Recognizing the significance of context, it's crucial for Framework users to integrate diverse perspectives on the AI system. This includes input from internal teams, external collaborators, end users, and any potentially affected individuals, among others, to ensure a comprehensive understanding during this phase.
Like the Govern function, Map also considers the dual aspects of an AI system: it makes recommendations for the organization itself as well as for the AI system specifically. Map 1.1 addresses the organizational practices that help achieve the Map function:
Map 1.1 is especially concerned with how and where an AI system is used (known as 'context mapping'). For this subfunction, organizations should be cognizant of the specific set or types of users along with their expectations, as well as the potential positive and negative impacts of system use on individuals, communities, organizations, society, and the planet. NIST notes that even highly accurate and optimized systems can cause harm. As such, it recommends discussing and considering non-AI or non-technology alternatives in some cases. In the Map 4 function, NIST makes recommendations regarding the AI system, encouraging organizations to map the risks and benefits of all components of the system. Within that function, Map 4.2 addresses third-party technologies:
Map 4.2 may be especially helpful for organizations whose AI systems use open-source or otherwise freely available third-party technologies, which may carry privacy, bias, or security risks.
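A practical starting point for mapping third-party components is simply enumerating them. The sketch below is an illustrative example, not a Playbook requirement: it lists the installed Python packages in an environment so each can then be reviewed for licensing, security, and maintenance risk:

```python
from importlib.metadata import distributions

# Starting point for a third-party component inventory: enumerate installed
# packages so each can be reviewed for licensing, security, and maintenance risk.
components = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in distributions()
    if dist.metadata["Name"]  # skip entries with malformed metadata
)
for name, version in components:
    print(f"{name}=={version}")
```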
The Measure function involves assessing, analyzing, or tracking the risks first identified in the Govern and Map functions. It includes quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze and evaluate AI risks and their related impacts. The Map function is critical to informing Measure, and the results will in turn inform the Manage function. Measure 1.1 addresses these metrics:
NIST notes that AI technologies present new failure modes compared to traditional software systems because of their reliance on training data and methods, which directly relate to data quality. The AI RMF consistently emphasizes the sociotechnical nature of AI systems, meaning that risks often emerge from the interplay between the technical aspects of a system, the people who operate it, and the context in which it is operated.
Measure 2 outlines how AI systems are evaluated for trustworthy characteristics, and Measure 2.11 specifically addresses fairness and bias:
Fairness includes concerns for equality and equity by addressing issues such as bias and discrimination. NIST separates bias into three categories: systemic, computational and statistical, and human-cognitive.
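One common quantitative check for computational and statistical bias is demographic parity: whether positive predictions are made at similar rates across groups. The example below is a minimal, hypothetical sketch; the data and the two-group setup are illustrative, and real fairness assessments use richer metrics and context:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Illustrative bias metric: difference in positive-prediction rates
    between two groups (0 means parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical binary predictions and the group membership of each individual.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(predictions, groups))  # 0.5 -> substantial disparity
```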
Once organizations have gathered and measured all necessary information about an AI system, they can respond to the identified risks. The Manage function, within the AI RMF Core, advises users on how to prioritize and address these risks based on their projected impact. It offers detailed guidance on allocating resources to manage mapped and measured risks on a regular basis, including any necessary domain expertise acquired during the Measure function. Additionally, it covers aspects such as communication and incident reporting to affected communities.
In the Manage function, all preceding functions converge. The contextual insights acquired during the Map phase are used to reduce the likelihood of system failures and their consequences. The systematic documentation practices established in Govern, and applied throughout Map and Measure, bolster AI risk management and enhance transparency during Manage. As with the other functions, Framework users should apply the Manage function continuously as the AI system, the organization, and contextual needs evolve over time. In Manage subfunction 1.2, the Playbook provides instructions for actions on an organizational scale:
NIST defines risk as the “composite measure of an event’s probability of occurring and the magnitude (or degree) of the consequences of the corresponding events.” It notes that the impacts, or consequences, of AI systems can be positive, negative, or both, and can result in opportunities or risks. Organizational risk tolerance plays an important role in the Manage function, as it determines how organizations choose to respond to the risks identified in the Map function. Risk tolerance is typically informed by several internal and external factors, including existing industry practices, organizational values, and legal or regulatory requirements. In Manage 3.2, NIST addresses responses for AI models in particular:
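Following that definition, one simple way to prioritize responses is to score each identified risk as probability times magnitude. The sketch below is illustrative only; the events, probabilities, and magnitudes are hypothetical, and real prioritization would also weigh organizational risk tolerance:

```python
# Illustrative composite risk score: probability of an event times the
# magnitude of its consequences. All values below are hypothetical.
risks = [
    {"event": "biased screening outcomes", "probability": 0.30, "magnitude": 9},
    {"event": "model drift after deployment", "probability": 0.60, "magnitude": 4},
    {"event": "training-data privacy leak", "probability": 0.05, "magnitude": 10},
]

for risk in risks:
    risk["score"] = risk["probability"] * risk["magnitude"]

# Respond to the highest-scoring risks first, subject to organizational risk tolerance.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{risk['event']}: score={risk['score']:.2f}")
```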
In AI development, transfer learning is common, where pre-trained models are adapted for related applications. Developers often utilize third-party pre-trained models for tasks like image classification, language prediction, and entity recognition due to limited resources. These models are trained on large datasets and require significant computational resources. However, their use can pose challenges in anticipating negative outcomes, especially without proper documentation or transparency tools, hindering root cause analyses during deployment.
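As a concrete illustration, the sketch below adapts a third-party pre-trained model for a downstream task and records its provenance so later root-cause analyses have something to work from. It uses PyTorch and torchvision, which the Playbook does not mandate, and the class count and provenance fields are assumptions for the example:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # illustrative: number of classes in the downstream task

# Load a third-party pre-trained model (ImageNet weights from torchvision).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers; only the new classification head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for the downstream task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# Document the provenance of the third-party component to support later analyses.
provenance = {
    "base_model": "torchvision resnet18",
    "pretraining_data": "ImageNet-1k (per upstream weights documentation)",
    "adapted_for": "internal image classification task",
}
print(provenance)
```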
Although voluntary, implementing an AI risk management framework can increase trust and improve your ROI by ensuring your AI systems perform as expected. Holistic AI’s Governance Platform is a 360 solution for AI trust, risk, security, and compliance and can help you get ahead of evolving AI standards. Schedule a demo to find out how we can help you adopt AI with confidence.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.