On Monday 29 April 2024, the National Institute of Standards and Technology (NIST) published a draft AI RMF Generative AI Profile. Designed as a companion piece to its AI Risk Management Framework (AI RMF), the Generative AI Profile guides organizations in identifying and responding to the risks posed by generative AI. The Profile was released alongside three other draft documents focused on generative AI (GAI): Secure Software Development Practices for Generative AI and Dual-Use Foundation Models; Reducing Risks Posed by Synthetic Content; and A Plan for Global Engagement on AI Standards. These guidelines are part of NIST's mandate under President Biden's Executive Order on AI. Like the AI RMF, all of the draft documents are voluntary and cross-sectoral. In this blog post, we outline what you need to know about NIST's Generative AI Profile.
The AI RMF Generative AI Profile serves as both a use-case and a cross-sectoral profile of the AI RMF 1.0. Use-case profiles offer insights into implementing the AI RMF functions for specific applications, while cross-sectoral profiles guide the governance of risks associated with activities that are common across sectors. By delineating risks and corresponding actions, the Profile provides a roadmap for managing GAI-related challenges across the stages of the AI lifecycle.
As with the AI RMF itself, the draft Generative AI Profile is open for comment until 2 June 2024, giving the GAI community an opportunity to shape the final framework.

Identifying Risks According to the Generative AI Profile
The draft Profile highlights a spectrum of risks unique to or exacerbated by GAI, ranging from the proliferation of dangerous content to environmental impacts. These risks include:
- Chemical, biological, radiological, or nuclear (CBRN) information
- Confabulation
- Dangerous or violent recommendations
- Data privacy
- Environmental impacts
- Human-AI configuration
- Information integrity
- Information security
- Intellectual property
- Obscene, degrading, and/or abusive content
- Toxicity, bias, and homogenization
- Value chain and component integration
NIST provides various proactive measures organizations can take to mitigate the risks of GAI. These actions, categorized under the AI RMF Core functions—Govern, Map, Measure, and Manage—provide a structured approach to risk management:
- Govern: establish the policies, processes, and accountability structures that cultivate a culture of risk management
- Map: establish the context in which a GAI system operates and identify the risks that arise from it
- Measure: assess, analyze, and track the identified risks using appropriate methods and metrics
- Manage: prioritize the identified risks and allocate resources to respond to, mitigate, and monitor them
It is essential to recognize that not all actions will be relevant to every organization. The framework emphasizes tailoring risk management strategies to an organization's unique context, so organizations should assess their risk tolerance and resource capabilities to prioritize actions effectively. Nonetheless, some actions, such as many of those under the Govern function, are considered "foundational," meaning they should be treated as fundamental tasks for GAI risk management. This aligns with the overall recommendation of the AI RMF Core, which positions the Govern function as the bedrock of the entire framework.
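As a rough illustration of how a team might operationalize this prioritization (this is our sketch, not part of the NIST guidance), the Profile's suggested actions could be recorded as simple entries and foundational, in-scope items surfaced first. The action IDs, fields, and descriptions below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SuggestedAction:
    action_id: str      # hypothetical ID, e.g. "GV-1.1-001"
    function: str       # Govern, Map, Measure, or Manage
    description: str
    foundational: bool  # flagged as foundational for GAI risk management
    in_scope: bool      # judged relevant during an internal scoping exercise

# Hypothetical subset of actions after scoping against the organization's context
actions = [
    SuggestedAction("GV-1.1-001", "Govern", "Document applicable legal requirements", True, True),
    SuggestedAction("MG-4.2-003", "Manage", "Engage interested parties on system updates", False, True),
    SuggestedAction("MS-2.5-002", "Measure", "Track provenance of training data", False, False),
]

# Foundational, in-scope actions form the baseline work plan; the remainder
# are prioritized against risk tolerance and available resources.
work_plan = sorted(
    (a for a in actions if a.in_scope),
    key=lambda a: (not a.foundational, a.function, a.action_id),
)
for a in work_plan:
    flag = " (foundational)" if a.foundational else ""
    print(f"{a.action_id} [{a.function}]{flag}: {a.description}")
```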
For example, consider Govern 1.1, which is considered foundational. Govern 1.1 states that legal and regulatory requirements involving AI should be understood, managed, and documented, and the Profile's suggested actions here include aligning GAI use with applicable laws, such as any obligations to disclose the use of GAI.
While these actions are voluntary under the NIST generative AI guidelines, organizations may still be compelled to follow them in certain jurisdictions where, for example, they are required to disclose the use of GAI to end users.
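As a minimal sketch of what such a disclosure might look like in practice (an assumption for illustration, not a requirement drawn from the Profile), an application could label GAI output before it reaches the end user:

```python
def with_gai_disclosure(generated_text: str, model_name: str) -> dict:
    """Wrap model output with a user-facing disclosure label.

    The field names and wording here are illustrative only; actual disclosure
    obligations depend on the jurisdiction and the deployment context.
    """
    return {
        "content": generated_text,
        "generated_by_ai": True,
        "disclosure": f"This content was generated by an AI system ({model_name}).",
    }

response = with_gai_disclosure("Here is a summary of your document...", "example-model")
print(response["disclosure"])
```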
By contrast, for Manage 4.2, which is not considered foundational, NIST recommends actions concerning organizational practice, an area where legal requirements do not typically apply. Manage 4.2 calls for measurable activities for continual improvement to be integrated into AI system updates, including regular engagement with interested parties.
Although voluntary, implementing an AI risk management framework can build trust and improve your ROI by ensuring your AI systems perform as expected. Holistic AI's Governance Platform is a 360-degree solution for AI trust, risk, security, and compliance that can help you get ahead of evolving AI standards. Schedule a demo to find out how we can help you adopt AI with confidence.