The National Institute of Standards and Technology (NIST), one of the leading voices in the development of AI standards, today announced the launch of the first version of its Artificial Intelligence Risk Management Framework (AI RMF). Developed over the past 18 months through a consensus-driven and open process, the AI RMF was shaped by more than 400 formal comments from 240 organisations. This official first version refines earlier drafts and is accompanied by a companion AI RMF Playbook, which NIST plans to update regularly.
NIST’s work will influence future US legislation and global AI standards, as well as the practices of enterprises across the US. The framework also aims to foster public trust in evolving technologies such as AI.
Key takeaways
The purpose of the AI RMF is to help organisations ‘prevent, detect, mitigate, and manage AI risks.’ It is deliberately non-prescriptive and agnostic to industry and use case, reflecting how heavily AI risk depends on context. It can also help an organisation determine its own risk tolerance.
The end goal of the AI RMF is to promote the adoption of trustworthy AI, which NIST characterises as AI systems that are valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair, with harmful bias managed.
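To make that definition concrete, here is a minimal sketch that encodes these characteristics as a simple self-assessment checklist. The code, the scoring labels, and the example system name are our own illustration and are not part of the AI RMF itself.

```python
from dataclasses import dataclass, field

# The seven trustworthiness characteristics named in the AI RMF.
CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

@dataclass
class TrustworthinessReview:
    """Illustrative self-assessment: one verdict per characteristic."""
    system_name: str
    # characteristic -> "met" / "gap" / "unknown" (hypothetical labels)
    verdicts: dict = field(default_factory=dict)

    def record(self, characteristic: str, verdict: str) -> None:
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.verdicts[characteristic] = verdict

    def gaps(self) -> list:
        # Anything not explicitly marked "met" still needs attention.
        return [c for c in CHARACTERISTICS if self.verdicts.get(c, "unknown") != "met"]

review = TrustworthinessReview("loan-scoring-model")
review.record("safe", "met")
review.record("privacy-enhanced", "gap")
print(review.gaps())
```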
At the launch, NIST Director Laurie Locascio highlighted three key areas of the RMF.
The AI RMF is built around three key processes, Map, Measure, and Manage, with governance, the cross-cutting Govern function, at the heart of all three.
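For teams that want something concrete to hang these functions on, below is a minimal sketch of a risk register keyed by the four core functions. The structure, field names, and example entries are assumptions on our part; the AI RMF deliberately leaves implementation choices to each organisation.

```python
from dataclasses import dataclass
from enum import Enum

class Function(Enum):
    """The four AI RMF core functions."""
    GOVERN = "govern"    # cross-cutting: policies, roles, accountability
    MAP = "map"          # establish context and identify risks
    MEASURE = "measure"  # analyse, assess, and track risks
    MANAGE = "manage"    # prioritise and act on risks

@dataclass
class RiskEntry:
    description: str
    function: Function
    owner: str
    status: str = "open"

# Hypothetical entries for a single AI system.
register = [
    RiskEntry("No documented accountability for model updates", Function.GOVERN, "CTO office"),
    RiskEntry("Training data may under-represent some user groups", Function.MAP, "data team"),
    RiskEntry("No bias metrics tracked in production", Function.MEASURE, "ML team"),
    RiskEntry("No rollback plan for harmful model behaviour", Function.MANAGE, "ops team"),
]

# Governance cuts across the other three functions, so surface those entries first.
for entry in sorted(register, key=lambda e: e.function is not Function.GOVERN):
    print(f"[{entry.function.value}] {entry.description} (owner: {entry.owner})")
```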
NIST recommends applying the AI RMF from the beginning of the AI lifecycle, and that diverse groups of internal and external stakeholders involved in, or affected by, the design, development, and deployment of AI systems take part in ongoing risk management. Effective risk management should help people understand the downstream risks and potential unintended consequences of these systems, especially how they may affect individuals, groups, and communities.
This year, in line with Executive Order (EO) 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (2020), NIST will also be responsible for re-evaluating and assessing AI that has been deployed or is in use by federal agencies. The aim is to ensure consistency with the policies in EO 13960, whose guiding principles are described as consistent with American values and applicable laws.
The order also requires that agencies, excluding those in national security and defence, publish an inventory of their non-classified, non-sensitive current and planned AI use cases. These use cases will feed into NIST’s evaluation, which in turn will shape the next iteration of the frameworks that guide government use of AI.
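As a rough illustration, an inventory entry could be captured in machine-readable form along the lines below. EO 13960 mandates the inventories but does not prescribe a schema, so every field name here is hypothetical.

```python
import json

# Hypothetical schema for one AI use-case inventory entry.
use_case = {
    "agency": "Example Agency",
    "use_case_name": "Automated benefits triage",
    "status": "planned",  # e.g. "planned" or "in use"
    "purpose": "Route incoming claims to the right review queue",
    "classified": False,
    "sensitive": False,
}

# Only non-classified, non-sensitive entries are made public.
if not use_case["classified"] and not use_case["sensitive"]:
    print(json.dumps(use_case, indent=2))
```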