
NIST Launches AI Risk Management Framework 1.0

Published on Jan 26, 2023

The National Institute of Standards and Technology (NIST), one of the leading voices in the development of AI standards, announced today the launch of the first version of its Artificial Intelligence Risk Management Framework (AI RMF). Developed over the past 18 months through a consensus-driven and open process, the AI RMF was shaped by more than 400 formal comments from 240 organisations. Refining previous drafts, this official first version is accompanied by an AI Playbook, which will be updated every few months.

NIST’s work will influence future US legislation and global AI standards, as well as the activities of enterprises across the US. It will also help to promote public trust in evolving technologies such as AI.

Key takeaways

  • NIST Director Laurie Locascio stresses the importance of public trust in rapidly evolving technologies – the AI RMF is a means of building this trust.
  • NIST acknowledges the positive changes that AI can bring to infrastructure, industry, and scientific research, while also emphasising the risks AI can bring.
  • The AI RMF addresses the negative impact AI can have on bias and inequality, working towards a framework that can help AI preserve civil rights and liberties.
  • By promoting a rights-affirming approach, NIST anticipates that both the likelihood and the degree of harm will decrease.
  • The AI RMF emphasises thinking more critically about the context and use of AI; the framework is not meant to be a one-size-fits-all approach but rather encourages flexibility for innovation.
  • At the launch, NIST Director Laurie Locascio highlighted three key areas of the RMF: flexibility, measurement, and trustworthiness.
  • NIST is promoting a feedback loop, planning to hear periodically from organisations that employ the framework in order to establish global gold standards in line with EU regulation.

The AI RMF

The purpose of the AI RMF is to support organisations to ‘prevent, detect, mitigate, and manage AI risks’. It is designed to be non-prescriptive and industry- and use-case-agnostic, recognising the vital role of context. It can also be used to determine what an organisation’s risk tolerance should be.

The end goal of the AI RMF is to promote the adoption of trustworthy AI, defined by NIST as high-performing AI systems that are safe, valid, reliable, fair, privacy-enhancing, transparent and accountable, and explainable and interpretable.

At the launch, NIST Director Laurie Locascio highlighted three key areas of the RMF:

  • Flexibility: to promote innovation and to acknowledge that trade-offs are always involved, as not everything applies equally in all scenarios
  • Measurement: if you cannot measure, you cannot improve (a simple illustration follows this list)
  • Trustworthiness: outputs should not favour one group over another, and data used to train AI models should protect sensitive information, ensuring it cannot be extracted through other means
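
To make the measurement point concrete, below is a minimal sketch of one widely used group-fairness measurement, the disparate impact ratio, which compares favourable-outcome rates between two groups. The group data and the 0.8 threshold are illustrative assumptions – the 0.8 figure is a common rule of thumb, not something prescribed by the AI RMF:

```python
# Minimal sketch: disparate impact ratio between two groups.
# The outcome data and the 0.8 threshold below are illustrative
# assumptions, not values prescribed by the NIST AI RMF.

def favourable_rate(outcomes: list[int]) -> float:
    """Share of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of favourable-outcome rates (group_a relative to group_b)."""
    return favourable_rate(group_a) / favourable_rate(group_b)

# Hypothetical model decisions (1 = favourable) for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]   # favourable rate: 0.625
group_b = [1, 1, 1, 0, 1, 1, 1, 1]   # favourable rate: 0.875

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.71

# A common rule of thumb flags ratios below 0.8 for further review.
if ratio < 0.8:
    print("Potential disparity - investigate before deployment.")
```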

Three-pronged approach

The AI RMF is based around three key processes – Map, Measure, and Manage – with a fourth function, Govern, at the heart of all three (a sketch of how they might fit together follows the list):

  • Map - Organisations must understand what their AI system is trying to achieve and the benefits of this compared to the status quo. A solid understanding of an AI system’s business value, purpose, specific task, usage, and capabilities gives organisations the contextual information needed to decide whether to develop the system.
  • Measure - If organisations decide to continue developing their AI systems, quantitative and qualitative methods should be employed to analyse and assess each system’s risks and trustworthiness. Metrics and methodologies must be developed, and independent experts involved. These metrics can help assess an AI system along dimensions such as fairness, privacy, transparency, explainability, safety, and reliability.
  • Manage - Identified risks must be managed, prioritising higher-risk AI systems. Risk monitoring should be an iterative process: post-deployment monitoring is crucial, given that new and unforeseen risks can emerge.
  • Govern - Organisations must cultivate a risk management culture, including the appropriate structures, policies, and processes. Risk management should be a priority for the C-suite.
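
As a rough illustration of how these four functions might fit together in practice, the sketch below implements a toy risk register. All class names, fields, and the severity-times-likelihood scoring are hypothetical assumptions; the AI RMF prescribes no particular code structure or scoring formula:

```python
# Toy risk register illustrating the Map / Measure / Manage / Govern flow.
# All names, fields, and the severity * likelihood score are hypothetical
# illustrations, not part of the NIST AI RMF itself.
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str
    severity: float = 0.0    # estimated degree of harm, 0-1
    likelihood: float = 0.0  # estimated probability, 0-1

    @property
    def score(self) -> float:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def map(self, description: str) -> None:
        """MAP: record a risk identified from the system's context of use."""
        self.risks.append(AIRisk(description))

    def measure(self, description: str, severity: float, likelihood: float) -> None:
        """MEASURE: attach quantitative estimates to a mapped risk."""
        for risk in self.risks:
            if risk.description == description:
                risk.severity, risk.likelihood = severity, likelihood

    def manage(self) -> list[AIRisk]:
        """MANAGE: return risks ordered by score, highest first."""
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

# GOVERN is the surrounding organisational process: who owns this register,
# how often it is reviewed, and who signs off before deployment.
register = RiskRegister()
register.map("Model favours one demographic group in loan approvals")
register.measure("Model favours one demographic group in loan approvals",
                 severity=0.9, likelihood=0.4)
for risk in register.manage():
    print(f"{risk.score:.2f}  {risk.description}")
```

After deployment, the same register could be re-scored as new risks emerge, in keeping with the iterative monitoring the framework calls for.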

Applying the AI RMF

NIST recommends that the AI RMF be applied at the beginning of the AI lifecycle, and that diverse internal and external stakeholders involved in (or affected by) the design, development, and deployment of AI systems take part in ongoing risk management efforts. Effective risk management is expected to help people understand the downstream risks and potential unintended consequences of these systems, especially how they may impact people, groups, and communities.

What is next?

This year, in line with Executive Order (EO) 13960, Promoting the Use of Trustworthy AI in the Federal Government (2020), NIST will also be responsible for re-evaluating and assessing any AI that has been deployed or is in use by federal agencies. This is to ensure consistency with the policies in EO 13960, whose guiding principles are described as being in line with American values and applicable laws.

The order also requires that agencies, excluding those in national security and defence, make public an inventory of non-classified and non-sensitive current and planned AI use cases. These use cases will be part of NIST’s evaluation, which will in turn shape the next iteration of frameworks guiding government use of AI.

Download our comments here

DISCLAIMER: This news article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
