NIST’s AI Risk Management Framework Explained

August 19, 2022
Authored by Airlie Hilliard, Senior Researcher at Holistic AI

Artificial Intelligence (AI) systems are increasingly being used to automate processes and decision-making in countless areas, from facial recognition entry systems to instant results for mortgage and insurance applications. Alongside this wave of innovation, the question of how to effectively manage the risks of AI is climbing up the policy and business agenda.

One of the leading voices in this discussion is the National Institute of Standards and Technology (NIST), the U.S. government agency that recently published a second draft of its AI Risk Management Framework (AI RMF). NIST’s work will influence future U.S. legislation and global AI standards, as well as the activities of enterprises across the U.S.

This blog post explains the key elements of NIST’s AI RMF and why AI risk management will become embedded as a core business function in the coming years.

What are ‘AI risks’?

According to NIST, ‘AI risks’ are the potential harms to people, organisations or systems resulting from the development and deployment of AI systems. AI creates novel risks that necessitate a bespoke risk management approach. Examples of harm range from automated recruitment tools discriminating against particular groups to uncontrollable trading algorithms herding market behaviour and causing crashes. These risks can stem from the data used to train and test the AI system, the system itself (i.e., the algorithmic model), the way the system is used, and its interaction with people.

What is the AI risk management framework?

NIST’s AI RMF is a set of high-level voluntary guidelines and recommendations that organisations can follow to assess and manage risks stemming from the use of AI.

A consultation on this latest draft of the AI RMF is open until September 29, 2022, and NIST is keen for external stakeholders to provide feedback. The official AI RMF 1.0 will be released in January 2023 and is intended to be a living document, updated as the technology and its risks evolve. The AI RMF is complemented by the AI RMF Playbook, which provides more detailed guidance and resources to assist with implementation.

Who is the AI risk management framework for?

The purpose of the AI RMF is to help organisations ‘prevent, detect, mitigate and manage AI risks’. It is intended for any organisation developing, commissioning or deploying AI systems and is designed to be non-prescriptive, as well as industry- and use-case-agnostic.

The end goal of the AI RMF is to promote the adoption of trustworthy AI, defined by NIST as high-performing AI systems that are safe, valid and reliable, fair, privacy-enhancing, transparent and accountable, and explainable and interpretable. Although NIST does not propose what an organisation’s risk tolerance should be, the AI RMF can be used to determine it internally.

NIST recommends that the AI RMF be applied at the beginning of the AI lifecycle, and that diverse groups of internal and external stakeholders involved in (or affected by) the design, development and deployment of AI systems take part in ongoing risk management efforts. Effective risk management is expected to encourage people to understand the downstream risks and potential unintended consequences of these systems, especially how they may impact people, groups and communities.

Core elements of the AI risk management framework

There are four core elements of the AI RMF: i) Govern; ii) Map; iii) Measure; iv) Manage.

Govern

Organisations need to cultivate a culture of AI risk management and establish appropriate structures, policies and processes, with senior leadership playing a key role. This includes managing legal and regulatory requirements, setting up reporting lines and accountability structures, having policies and processes for AI procurement, third-party suppliers and decommissioning, and delivering AI risk management training.

Map

Organisations should understand exactly what the AI system is trying to achieve and why it is beneficial relative to the status quo. Understanding the AI system’s business value, intended purpose, specific tasks, usage and capabilities provides sufficient contextual information for organisations to decide whether to develop, commission or procure it.
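
To make this concrete, here is a minimal sketch in Python of the kind of contextual record the Map function calls for: a structured summary of a system’s purpose, value and capabilities that can inform a go/no-go decision. The class, field names and example values are illustrative assumptions, not anything prescribed by NIST.

```python
# A minimal sketch of a Map-stage contextual record: a structured
# summary of an AI system's purpose, value and capabilities.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemContext:
    name: str
    intended_purpose: str
    business_value: str                  # benefit relative to the status quo
    specific_tasks: list = field(default_factory=list)
    capabilities: list = field(default_factory=list)
    usage: str = ""

profile = AISystemContext(
    name="CV screening assistant",
    intended_purpose="Rank incoming applications for recruiter review",
    business_value="Reduces initial screening time versus manual triage",
    specific_tasks=["parse CVs", "score candidates against role criteria"],
    capabilities=["text extraction", "ranking"],
    usage="Human-in-the-loop shortlisting only",
)
print(profile)
```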

Measure

If organisations decide to proceed, they should then use quantitative and qualitative techniques to analyse and assess the system’s risks and trustworthiness. This requires the development of bespoke metrics and methodologies and the involvement of independent experts. These metrics can then be used to assess whether the system is fair, privacy-enhancing, transparent, explainable, safe and reliable.
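
As a hedged illustration of what one such quantitative metric might look like, the sketch below computes the disparate impact ratio, a common group-fairness measure, over hypothetical screening decisions. The data and the 0.8 threshold (the conventional ‘four-fifths rule’) are assumptions for illustration, not AI RMF requirements.

```python
# A minimal sketch of one quantitative fairness metric: the disparate
# impact ratio. The data and the 0.8 threshold (the conventional
# "four-fifths rule") are illustrative assumptions, not requirements
# of the AI RMF.

def selection_rate(outcomes):
    """Share of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (0 to 1)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening decisions (1 = shortlisted) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # selection rate: 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate: 0.25

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # below the four-fifths threshold
    print("Potential adverse impact - flag for human review.")
```

A single ratio like this is only a starting point: the AI RMF expects multiple metrics spanning the full set of trustworthiness characteristics listed above.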

Manage

Where risks are identified, they should be managed, with priority given to higher-risk AI systems. Post-deployment monitoring is also crucial, as new and unforeseen risks can emerge as the system evolves and learns in the real world.
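
As one sketch of what post-deployment monitoring could look like in practice, the example below tracks a model’s rolling accuracy against its deployment-time baseline and flags degradation. The choice of metric, window size and tolerance are illustrative assumptions rather than anything the framework mandates.

```python
# A minimal sketch of post-deployment monitoring: track a model's
# rolling accuracy against its deployment-time baseline and flag
# degradation. Metric, window size and tolerance are illustrative
# assumptions, not AI RMF requirements.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling 1/0 correctness record

    def record(self, prediction, actual):
        self.outcomes.append(int(prediction == actual))

    def check(self):
        """Return a status string comparing live accuracy to the baseline."""
        if not self.outcomes:
            return "No observations yet."
        live = sum(self.outcomes) / len(self.outcomes)
        if self.baseline - live > self.tolerance:
            return f"ALERT: live accuracy {live:.2f} vs baseline {self.baseline:.2f}"
        return f"OK: live accuracy {live:.2f}"

# Hypothetical stream of (prediction, actual) pairs after deployment.
monitor = DriftMonitor(baseline_accuracy=0.90)
for prediction, actual in [(1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]:
    monitor.record(prediction, actual)
print(monitor.check())
```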

NIST’s approach to AI risk management is holistic: equal consideration is given to the human, organisational and technical sources of risk, as well as the role of these three dimensions in mitigating it. For example, risks can stem from the implicit biases of development teams (a human issue), inadequate due diligence checks on AI suppliers (an organisational issue), or the fine-tuning of one AI model for use in a completely different application (a technical issue).

AI Risk Management will become a core part of doing business

AI risk management will become an embedded and cross-cutting function in all major enterprises by the end of the decade, similar to privacy and cybersecurity. Not having adequate oversight and control over your AI systems will be seen as archaic and unacceptable. Regulators will require it, consumers will expect it and businesses will embrace it as a strategic necessity. As the inevitable scandals mount, effective AI risk management is likely to become a competitive differentiator for firms, just like privacy is today.

