On January 10, 2024, U.S. Congress members Ted W. Lieu, Zach Nunn, Don Beyer, and Marcus Molinaro announced the introduction of the Federal Artificial Intelligence Risk Management Act of 2024 (H.R. 6936).
Previously introduced in 2023, the bipartisan bill would require federal agencies to use the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). The requirement would apply to any department, independent establishment, government corporation, or other agency of the executive branch of the federal government, with an exception for national security systems.
As one of the leading voices in the development of AI standards, NIST published its AI RMF in January 2023 after conducting an extensive consensus-driven and open process that saw the submission of more than 400 formal comments from 240 organizations.
The Framework, accompanied by an AI playbook, was developed to help organizations ‘prevent, detect, mitigate, and manage AI risks’ using a non-prescriptive approach that is agnostic to industry and use case. It does this through four core functions: Govern, Map, Measure, and Manage.
Within one year of the Bill's passing, the Director of NIST would be required to issue guidance for agencies on incorporating the AI RMF, including:
Here, a profile is an implementation of AI risk management functions, categories, and subcategories for a specific setting or application, based on the requirements, risk tolerance, and resources of the framework user.
The Director of the Office of Management and Budget would also be required to issue guidance requiring agencies to incorporate the framework and guidelines into their AI risk management efforts within 180 days of NIST’s guidelines being published.
The Act would also require the Director of NIST and the Administrator of Federal Procurement Policy to provide draft contract language for each agency to use when procuring AI, requiring suppliers to adhere to the framework and to give the Director of NIST access to the elements necessary for evaluation and validation.
Within a year of the Act being enacted, the Comptroller General of the US would be required to conduct a study on the impact of the framework on agency use of AI. The Director of the Office of Management and Budget would also be required to submit a report to Congress on agency implementation of and conformity to the framework.
Within a year of the Act's passing, the Federal Acquisition Regulatory Council would be required to develop regulations on requirements for the acquisition of AI, including risk-based compliance with the AI RMF, as well as solicitation provisions and contract clauses that reference these requirements.
Within 90 days of the Act being enacted, the Director of NIST would be required to complete a study on existing and forthcoming voluntary consensus standards for the test, evaluation, verification, and validation of AI acquisitions. Within 90 days of completing the study, the Director must then consult relevant stakeholders to develop voluntary consensus standards for the test, evaluation, verification, and validation of AI acquisitions. These standards must then be used to develop methods and principles for the testing, evaluation, verification, and validation of AI acquisitions, along with the resources needed to carry them out.
The Federal Artificial Intelligence Risk Management Act is not the only law in the US that draws on NIST’s AI RMF; other laws at both the state and federal levels do too:
Furthermore, President Biden’s Executive Order 14110 on Artificial Intelligence calls for the development of a companion resource to the AI RMF for generative AI, as well as the incorporation of the framework into safety and security guidelines for use by critical infrastructure owners and operators.
There are increasing calls around the world for AI risk management to minimize AI harms and enable safe innovation. Not only may compliance with NIST’s AI Risk Management Framework soon be a legal requirement, but AI risk management can help to reduce the legal, reputational, and financial risks of AI and help you gain a competitive advantage by embracing AI confidently. Schedule a demo to find out how Holistic AI can help you apply the NIST AI Risk Management Framework.
DISCLAIMER: This article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts