
US Federal Artificial Intelligence Risk Management Act of 2024 Introduced

Authored by
Airlie Hilliard
Senior Researcher at Holistic AI
Published on
Jan 16, 2024

On January 10, 2024, U.S. Congress members Ted W. Lieu, Zach Nunn, Don Beyer, and Marcus Molinaro announced the introduction of the Federal Artificial Intelligence Risk Management Act of 2024 (HR6936).

First introduced in 2023, the bipartisan bill would require federal agencies in the US to use the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). This would apply to any department, independent establishment, government corporation, or other agency of the executive branch of the US federal government, with an exception for national security systems.

What is the NIST AI Risk Management Framework?

As one of the leading voices in the development of AI standards, NIST published its AI RMF in January 2023 after conducting an extensive consensus-driven and open process that saw the submission of more than 400 formal comments from 240 organizations.

The Framework, accompanied by an AI playbook, was developed to help organizations ‘prevent, detect, mitigate, and manage AI risks’ using a non-prescriptive, industry- and use-case-agnostic approach. It does this through four key functions:

  • Map - Organizations must understand what their AI system is trying to achieve and the benefits of its deployment compared to other methods
  • Measure - Quantitative and qualitative methods should be employed to analyze and assess the risk of the AI system and evaluate its trustworthiness
  • Manage - Identified risks must be managed, with higher-risk systems prioritized and iterative risk monitoring to account for new and unforeseen risks
  • Govern - A risk management culture must be cultivated and supported by appropriate structures, policies, and processes

Supporting guidance for RMF compliance

Within one year of the bill's enactment, the Director of NIST would be required to issue guidance for agencies on incorporating the AI RMF, including:

  • The standards, practices, and tools consistent with the framework and how they can be used to reduce the risk of AI
  • Cybersecurity strategies and tools to improve the security of AI systems
  • Standards tailored to the risks that could endanger people and the planet that must be met before an agency can procure AI from developers
  • Recommended training on the framework and guidelines for agencies in procuring AI
  • Minimum requirements for developing profiles for agency use of AI consistent with the framework
  • Profiles for framework use for an entity that is a small business concern

Here, a profile is an implementation of AI risk management functions, categories, and subcategories for a specific setting or application, based on the requirements, risk tolerance, and resources of the framework user.

The Director of the Office of Management and Budget would also be required to issue guidance requiring agencies to incorporate the framework and guidelines into their AI risk management efforts within 180 days of NIST’s guidelines being published.

The Act would also require the Director of NIST and Administrator of Federal Procurement Policy to provide draft contract language for each agency to use in the procurement of AI to require suppliers to adhere to the framework and provide access to necessary elements for evaluation and validation by the Director of NIST.

Study and reporting requirements

Within a year of the Act being enacted, the Comptroller General of the US would be required to conduct a study on the impact of the framework on agency use of AI. The Director of the Office of Management and Budget would also be required to submit a report to Congress on agency implementation of and conformity to the framework.

Federal Regulations

Within a year of the Act's passing, the Federal Acquisition Regulatory Council would be required to develop regulations on requirements for the acquisition of AI, including risk-based compliance with the AI RMF, as well as solicitation provisions and contract clauses that reference these requirements.

Voluntary consensus standards

Within 90 days of the Act being enacted, the Director of NIST would be required to complete a study on existing and forthcoming voluntary consensus standards for the test, evaluation, verification, and validation of AI acquisitions. Within 90 days of completing the study, the Director must then consult relevant stakeholders to develop voluntary consensus standards for the test, evaluation, verification, and validation of AI acquisitions. These standards must then be used to develop methods and principles for the tests, evaluations, verifications, and validations of AI acquisitions, along with the resources needed.

Other laws that require compliance with the AI RMF

The Federal Artificial Intelligence Risk Management Act is not the only law in the US that draws on NIST’s AI RMF; other laws at both the state and federal levels do too:

  • Virginia’s Artificial Intelligence Developer Act (HB747) creates operating standards for developers and deployers of AI and requires that risk management policies and programs developed in accordance with the law are at least as stringent as the AI RMF or another nationally or internationally recognized AI risk management framework
  • Vermont’s act relating to regulating developers and deployers of certain artificial intelligence systems (H0710) sets out requirements for high-risk AI systems and generative AI, requiring that risk management policies and programs developed in accordance with the law are at least as stringent as the AI RMF or another nationally or internationally recognized AI risk management framework
  • Vermont’s act relating to creating oversight and liability standards for developers and deployers of inherently dangerous artificial intelligence systems (H0711) imposes requirements for developers and deployers of high-risk AI systems, foundational models, and generative AI, where risk management policies and programs developed in accordance with the law must be at least as stringent as the AI RMF or another nationally or internationally recognized AI risk management framework
  • The Federal Farm Tech Act (HR6806) requires the establishment of a program, based on the AI RMF, for the certification of AI used to produce agricultural products.
  • The Federal No Robot Bosses Act (S2419) prohibits employers from relying exclusively on automated decision systems to make employment-related decisions and sets out requirements for pre-deployment testing and validation, including the compliance of the system with the AI RMF.

Furthermore, President Biden’s Executive Order 14110 on Artificial Intelligence calls for the development of a companion resource to the AI RMF for generative AI, as well as the incorporation of the framework into safety and security guidelines for use by critical infrastructure owners and operators.

Risk management should be prioritized

There are increasing calls around the world for AI risk management to minimize AI harms and enable safe innovation. Not only may compliance with NIST’s AI Risk Management Framework soon be a legal requirement, but AI risk management can help to reduce the legal, reputational, and financial risks of AI and help you gain a competitive advantage by embracing AI confidently. Schedule a demo to find out how Holistic AI can help you apply the NIST AI Risk Management Framework.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
