The State of Healthcare AI Regulations in the US

March 11, 2024
Authored by Airlie Hilliard, Senior Researcher at Holistic AI

Unlike other nations seeking to regulate AI, the US has taken a piecemeal approach, with laws of varying scope emerging at the state, federal, and local levels. Many US regulations have targeted AI use in specific industries such as HR and insurance. Increasingly, AI in healthcare has also been targeted in a use case-specific way.

Indeed, while healthcare has historically been a regulation-heavy sector, governing bodies have found that AI can introduce novel risks that require new considerations and policies. In this blog post, we outline the US laws seeking to impose requirements on AI in healthcare and what AI and governance, risk, and compliance (GRC) leaders can do to ensure compliance and trustworthiness in their systems more generally.

Healthcare AI Regulations in the US

Proposed AI in healthcare laws

The majority of US laws targeting AI in healthcare are still at the proposal stage at both the federal and state levels. That said, there are a number of proposed laws, any of which could affect national healthcare companies. Additionally, where AI laws have failed to pass in the United States, they have often been re-proposed in altered forms. In short, it is likely that at least components of the following proposals will at some juncture become legal requirements.

Federal law proposals

There are presently three proposals at the federal level that target AI in healthcare.

The Better Mental Health Care for Americans Act (S293) was introduced to the Senate on March 22, 2023. The bill modifies programs and payments under Medicare, Medicaid, and the Children’s Health Insurance Program.

Medicare Advantage (MA) organizations that impose nonquantitative treatment limitations on mental health or substance use disorder benefits must perform and document a comparative analysis of the design and application of these limits and make clear the factors used as evidence. This includes the use of artificial intelligence, machine learning, or other clinical decision-making technologies.

Additionally, section 1851(d)(4) of the Social Security Act is amended with a new subclause that requires information to be provided on denials made using AI, machine learning, or clinical decision-making technologies.

The Health Technology Act of 2023 (H.R.206) was introduced on January 9, 2023, to establish that AI or machine learning technologies may be eligible to prescribe drugs.

Under this act, AI or machine learning technologies may qualify as a prescribing practitioner if they are authorized by state law to prescribe the specific drug and are approved, cleared, or authorized under federal provisions governing medical devices and products.

The Pandemic and All-Hazards Preparedness and Response Act (S2333) was introduced in July 2023 to reauthorize certain programs under the Public Health Service Act.

Within 45 days of the Act being enacted, the Secretary of Health and Human Services must conduct a study with the National Academies of Sciences, Engineering, and Medicine to assess the potential vulnerabilities to health security presented by the use or misuse of AI, including large language models. Such risks include chemical, biological, radiological, or nuclear threats.

Within 2 years of carrying out the study, a report must be submitted by the National Academies to the Committee on Health, Education, Labor, and Pensions of the Senate and the Committee on Energy and Commerce of the House of Representatives on the actions taken to mitigate and monitor risks to health security resulting from the misuse of AI.

State level law proposals

Lawmakers have also been active at the state level, with most of the activity centered in the Eastern US.

Massachusetts has put forth a bill titled “An Act Regulating the use of artificial intelligence in providing mental health services” (H1974). The bill seeks to promote the safety and well-being of individuals seeking mental health treatment, as well as responsible AI practices.

The Act requires that any licensed mental health professional who wishes to use AI to provide mental health services must seek approval from the relevant licensing board. Those licensed to do so must disclose the use of AI to their patients and obtain their informed consent, as well as offer them the option to receive treatment from a human instead.

Moreover, AI systems used in the provision of mental health services must be designed to prioritize patient safety and well-being. Mental health professionals must also continuously monitor the system for its safety and effectiveness.

The Illinois Safe Patients Limit Act (SB2795) was first introduced in 2023 and reintroduced in January 2024 for the new session. The Act seeks to set limits on the number of patients that may safely be assigned to a registered nurse in specified situations, as well as to place restrictions on the use of AI.

Hospitals, long-term acute care hospitals, ambulatory surgical treatment centers, and other healthcare facilities are prohibited from adopting policies that substitute decisions or recommendations made by algorithms, artificial intelligence, or machine learning for the independent judgment of a registered nurse.

Georgia’s Act to amend Article 1 of Chapter 24 of Title 33 of the Official Code of Georgia Annotated (HB887) was introduced in January 2024 to amend the Georgia code with prohibitions on the use of AI-driven decision-making in insurance coverage and healthcare.

A new section is added to the code to prohibit insurance coverage decisions from being based solely on AI or automated decision tools and to require that any coverage decisions made using AI be meaningfully reviewed and overridden where necessary.

The same requirements are set out for healthcare, where healthcare decisions should not be made solely on the basis of AI or automated decision tools, and decisions made with the support of these tools should be meaningfully reviewed by someone with the authority to override the decision. Moreover, the board will be required to adopt rules and regulations governing and establishing the standards necessary to implement these requirements.

Similar provisions are also set out for public assistance decisions.

Healthtech laws already in effect

While many of the laws targeting AI in healthcare are still in progress, a Virginia law that amends the Code of Virginia in relation to hospitals, nursing homes, and certified nursing facilities has been in effect since March 18, 2021.

Virginia HB2154 requires hospitals, nursing homes, and certified nursing facilities in the state to establish and implement policies concerning the permissible access to and use of intelligent personal assistants provided by a patient. Here, an intelligent personal assistant is an electronic device and software application that uses a combination of natural language processing and AI to assist users with basic tasks.

WHO guidelines on large multi-modal models in healthcare

In addition to the US laws seeking to govern AI in healthcare, the World Health Organization (WHO) has published guidelines on the ethics and governance of large multi-modal models (LMMs).

Announced on January 19, 2024, the guidelines focus on generative AI models that can accept one or more types of input data and generate a vast range of outputs. Although these models are predicted to have applications in healthcare, scientific research, public health, and drug development, the guidance notes several risks of using LMMs in healthcare, including the possibility of incorrect or inaccurate outputs, bias, privacy concerns, a lack of human interaction between patients and healthcare professionals, automation bias, and a lack of accountability for outputs.

As such, the guidelines highlight risks that should be addressed to make LMMs safer for healthcare use, delineating actions across three phases: development, provision, and deployment. Within each phase, proposed actions are set out for governments, as well as for developers and deployers where appropriate.

Actions in the development stage include developer certification and training, developer data protection impact assessments, design aligned with consensus principles and ethical norms, government audits of systems during early development phases, and pre-certification programs.

In the provision phase, governments are encouraged to assign a regulatory agency to assess and approve LMMs for use in healthcare, require compliance with medical device regulations, and enact laws requiring impact assessments.

Finally, in the deployment phase, deployers are encouraged to avoid using LMMs in inappropriate ways and contexts, communicate risks and limitations, and ensure inclusive pricing, while governments are urged to mandate independent deployment audits and impact assessments, enforce requirements for technical documentation, and train healthcare workers on how to use LMMs in decision-making and avoid bias.

The time for AI governance in healthcare is now

There have been numerous examples of AI causing potential harm when used in healthcare without the appropriate safeguards and training, such as inaccurate pediatric diagnoses by ChatGPT. As AI adoption becomes more widespread, the use of these efficiency-raising tools surfaces greater reputational, regulatory, and financial risks than ever before.

Health tech is increasingly being targeted by policymakers and regulators, with fair decisions and risk management requirements as recurring themes. Managing the risks of AI is an important task, but one that requires expert oversight; it doesn’t happen overnight. Schedule a call to find out how Holistic AI can help you manage the risks of AI in healthcare and maintain a competitive edge through AI that builds trust and reliable efficacy.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or organization.
