In Conversation with NIST: AI Risk Management

Authored by Ashyana-Jasmine Kachra, Policy Associate at Holistic AI, and Airlie Hilliard, Senior Researcher at Holistic AI. Published on Feb 3, 2023.

Key Takeaways

  • The National Institute of Standards and Technology (NIST), one of the leading voices in the development of AI standards, launched the first version of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) on 26 January 2023.
  • According to NIST, ‘AI risks’ are defined as “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.”
  • Underpinning the AI RMF is a focus on moving beyond computational metrics and instead focusing on the socio-technical context of the development, deployment and impact of AI systems.
  • The question of measurement is vital to the operationalization of the AI RMF and AI governance more broadly, and should take a socio-technical approach informed by multiple perspectives.
  • The end goal of the AI RMF is to promote the adoption of trustworthy AI, defined by NIST as high-performing AI systems which are safe, valid, reliable, fair, privacy-enhancing, transparent & accountable, secure & resilient, and explainable & interpretable.

NIST overview

The National Institute of Standards and Technology (NIST), one of the leading voices in the development of artificial intelligence (AI) standards, launched the first version of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) on 26 January 2023. Developed over 18 months through a consensus-driven and open process, the framework was shaped by more than 400 formal comments from 240 organizations. This official first version refines previous drafts and is accompanied by an AI Playbook, which provides guidance to organizations on implementing the framework’s recommendations and will be updated periodically.

We sat down with NIST to discuss the AI RMF and learn about their vision for how it can be implemented. In this blog post, we discuss the key takeaways from the RMF and the insights NIST shared with us.

Why do we need AI Risk Management?

According to NIST, ‘AI risks’ are defined as “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.” AI risks in turn contribute to potential harms to people and organizations resulting from the development and deployment of AI systems. These risks can stem from the data used to train and test an AI system, from the system itself (i.e., the algorithmic model), and from the way the system is used and interacts with people.
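To make this definition concrete, below is a minimal sketch of how such a composite risk score might be computed and used to rank risks. The probability and magnitude scales, and the risk-source categories, are illustrative assumptions for this sketch; NIST does not prescribe a particular scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One identified risk, scored per NIST's definition of 'AI risk':
    the probability of an event times the magnitude of its consequences.

    The 0-1 probability, the 1-5 magnitude scale, and the source
    categories are illustrative assumptions, not prescribed by the AI RMF.
    """
    source: str         # e.g. "training data", "model", "usage context"
    probability: float  # likelihood of the event occurring (0.0-1.0)
    magnitude: float    # severity of the consequences (1-5)

    @property
    def score(self) -> float:
        # Composite measure: probability x magnitude.
        return self.probability * self.magnitude

# Hypothetical risk register for a single AI system.
risks = [
    AIRisk("training data", probability=0.4, magnitude=4.0),  # unrepresentative sampling
    AIRisk("model", probability=0.2, magnitude=5.0),          # unsafe failure mode
    AIRisk("usage context", probability=0.6, magnitude=2.0),  # operator misuse
]

# Rank risks so the highest composite scores are addressed first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.source}: {risk.score:.1f}")
```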

Recent examples of harm have resulted in legal action, including an ongoing lawsuit against insurance company State Farm over allegations that its automated claims processing has resulted in algorithmic bias against black homeowners. In another recent case, Louisiana authorities came under fire when the use of facial recognition technology led to a mistaken arrest and an innocent man was jailed for a week. The risks posed by AI should not be considered without nuance, which highlights the importance of operationalizing the RMF as a way to effectively navigate the more complex areas of risk.

The AI RMF

In light of the several cases of AI harm seen in recent years, the purpose of the AI RMF is to support organizations in efforts to ‘prevent, detect, mitigate, and manage AI risks.’ It is designed to be non-prescriptive and industry- and use-case-agnostic, recognizing that context is vital. It can also be used to help an organization determine what its risk tolerance should be.

The end goal of the AI RMF is to promote the adoption of trustworthy AI, defined by NIST as high-performing AI systems which are safe, valid, reliable, fair, privacy-enhancing, transparent & accountable, secure & resilient, and explainable & interpretable.

Although the recommendations of the AI RMF are voluntary, they are aligned with the White House’s Blueprint for an AI Bill of Rights and many are based around organizational structure and procedures, meaning they do not present a significant financial burden for those that adopt them.

NIST’s work will influence future US legislation and global AI standards, as well as the activities of enterprises across the US. It will also help promote public trust in evolving technologies such as AI.

AI RMF implementation

The AI RMF is based around four key functions, with governance at the heart of them all:

  • Map - Organizations must understand what their AI system is trying to achieve and its benefits compared to the status quo. A solid understanding of an AI system’s business value, purpose, specific task, usage, and capabilities gives organizations the contextual information needed to decide whether to develop the system.
  • Measure - If organizations decide to continue developing their AI systems, quantitative and qualitative methods should be employed to analyze and assess the system’s risks and its trustworthiness. Metrics and methodologies must be developed, and independent experts should be involved. These metrics can help assess an AI system along lines such as fairness, privacy, transparency, explainability, safety, and reliability (see the sketch after this list for an illustrative fairness metric).
  • Manage - Identified risks must be managed, prioritizing higher-risk AI systems. Risk monitoring should be an iterative process; post-deployment monitoring is crucial, given that new and unforeseen risks can emerge.
  • Govern - Organizations must cultivate a risk management culture, including having the appropriate structures, policies, and processes. Risk management should be a priority for the C-suite.
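
As one illustration of the quantitative side of the Measure function, the sketch below computes a common fairness metric, the disparate impact ratio (the selection rate of one group divided by that of another). The data, the metric choice, and the 0.8 review threshold (the “four-fifths rule” convention) are illustrative assumptions; the AI RMF does not prescribe specific metrics or thresholds.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of group A's selection rate to group B's.

    Values below ~0.8 are commonly flagged for review (the
    'four-fifths rule'); the threshold is a convention, not a
    requirement of the AI RMF.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model decisions for two demographic groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]  # selection rate 0.375
group_b = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.75

ratio = disparate_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f}: flag for review")
```

A metric like this is only one input: per NIST’s socio-technical framing, the number should be interpreted alongside qualitative review of the system’s context and by people from multiple disciplines.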

NIST recommends that the AI RMF be applied at the beginning of the AI lifecycle and that diverse groups of internal and external stakeholders involved in (or affected by) the process of designing, developing, and deploying AI systems should be involved in the ongoing risk management efforts. It is expected that effective risk management will encourage people to understand the downstream risks and potential unintended consequences of these systems, especially how they may impact people, groups, and communities.

Analyzing measurement

The question of measurement is vital to the operationalization of the AI RMF and AI governance more broadly. From NIST’s events preceding the launch of the AI RMF, the following were identified as key questions and dilemmas regarding measurement:

  • Ambiguity over what exactly is meant by ‘measurement’
  • A lack of understanding of how AI risks can be measured, and no standardized understanding of what measuring risk means
  • How risks can be measured from multiple perspectives
  • An integration of measurement metrics from both social science and machine learning, taking a socio-technical approach
  • We know what to measure today, but how do we account for unanticipated or unexpected harms that have not yet been observed and cannot yet be measured?

Moving beyond computational metrics

Underpinning the AI RMF is a focus on moving beyond computational metrics and instead focusing on the socio-technical context of the development, deployment, and impact of AI systems. The AI RMF was designed to help improve public trustworthiness of AI. As such, it seeks to address negative impacts of AI, such as the perpetuation of societal biases, discrimination, and inequality, working towards a framework that can help AI preserve civil rights and liberties. By promoting a rights-affirming approach, NIST anticipates that the likelihood and degree of harm will decrease.

Promoting a rights-affirming approach means moving away from looking only at data representation when mitigating bias and discrimination. Instead, it is critical to understand both the social and technical aspects of an AI system: what a system is trained on, and how it is trained and continues to learn, are both significant, especially from an interdisciplinary perspective. When it comes to trustworthy or responsible AI, the onus lies on more than just developers and engineers. Bias and discrimination can be predicted and mitigated by pushing teams across disciplines to collaborate, such as pairing social scientists with data scientists, and by investing in external assurance.

Getting started with AI risk management with Holistic AI

Holistic AI has pioneered the field of trustworthy AI and empowers enterprises to adopt and scale AI confidently. In line with the importance of a socio-technical approach highlighted by NIST, our team has the interdisciplinary expertise needed to identify and mitigate AI risks, with an approach informed by relevant AI policy.

Get in touch with a team member to find out how the Holistic AI Governance Platform can assist your organization in adopting the NIST AI Risk Management Framework.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
