Artificial intelligence (AI) is increasingly being used in critical applications such as healthcare. From diagnostics and remote patient monitoring to screening medical imaging for abnormalities, the applications of AI in healthcare are vast and can be used to streamline processes, help professionals manage their workloads, and provide patients with information. However, the use of these systems can also have direct implications for the quality of patient care and, consequently, patient health.
Although healthcare is already a heavily regulated sector, with healthcare professionals required to follow strict rules and regulations to uphold standards of patient care and wellbeing, the use of AI in healthcare can present novel risks that, if left unchecked, can result in far-reaching harms.
As such, policymakers around the world are moving to regulate AI in critical applications such as healthcare, although different jurisdictions are taking different approaches.
In this blog post, we explore how AI in healthcare would be regulated under the EU AI Act, California Assembly Bill 331, The DC Stop Discrimination by Algorithms Act, and the Algorithmic Accountability Act.
The EU AI Act is a horizontal piece of legislation that seeks to establish a global gold standard for regulating AI through its risk-based approach, under which obligations are proportionate to the risk posed by the system. Systems classed as posing an unacceptable level of risk are prohibited from being made available on the EU market; those posing a high level of risk must meet stringent requirements before their use is permitted in the EU; those posing limited risk are subject to transparency requirements; and those posing minimal risk are subject only to voluntary frameworks.
The main use cases considered to pose a significant risk to health, safety, and fundamental rights, and therefore categorised as high-risk under the AI Act, are those listed in Annex III. While systems used in healthcare are not explicitly named as a high-risk application in Annex III, elements of healthcare are covered. These include AI systems used to make decisions about eligibility for health and life insurance; systems used to evaluate and classify emergency calls and coordinate the dispatch of emergency services, including emergency healthcare patient triage systems; and systems used by public authorities to evaluate eligibility for public assistance benefits and services, including healthcare services.
Additionally, AI systems that are safety components of products, or are themselves products, falling within the scope of particular EU regulations listed in Annex II are considered high-risk if they undergo a third-party conformity assessment pursuant to the relevant harmonisation legislation. This includes medical devices that fall under the Medical Devices Regulation (EU) 2017/745 (MDR) and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR).
Systems that are considered high-risk must comply with the obligations set out in Articles 9–15, including the establishment of a risk management system. While standards are still being developed for these obligations, Recital 27 notes that requirements for high-risk systems should take into account sectoral legislation, including the MDR and IVDR, meaning that there could be specific provisions for AI systems used in healthcare that fall under these laws.
The AI Act does not only outline restrictions and obligations for the use of AI-driven healthcare systems. Beyond these obligations, Recital 28 highlights the need for diagnostic systems and those used to support healthcare decisions to be reliable and accurate.
To support this, Recital 45 asserts that the European Health Data Space will provide access to health data to train algorithms in a privacy-preserving, secure, transparent, and timely manner, supplemented by institutional governance.
Additionally, Article 54 prohibits the processing of personal data in regulatory sandboxes unless the systems are developed to safeguard the public interest in specified areas, including public health activities such as disease detection, diagnosis, prevention, control, and treatment, providing special permissions for the use of such data in the interests of upholding healthcare standards.
Elsewhere, in the US, horizontal legislation that seeks to regulate multiple use cases is less mature and is consequently easier to navigate. First introduced in 2019 and then reintroduced in 2021, the Algorithmic Accountability Act sought to require impact assessments of algorithms used to make critical decisions, covering issues such as bias and privacy as well as mechanisms for ongoing testing and monitoring. This includes systems used to make decisions that have a legal, material, or otherwise significant effect on a consumer’s life in terms of their access to, or the cost, terms, or availability of, services such as healthcare, which would include mental healthcare, dental, and vision.
Although the Algorithmic Accountability Act died in the 117th Congress, had it been passed, it would have fallen short of the EU AI Act by not considering other relevant legislation governing particular sectors, such as healthcare, as the AI Act does. This could have resulted in conflicting or duplicate obligations, demonstrating the importance of taking other regulations into account in heavily regulated sectors such as healthcare and medical devices. It also did not outline any provisions to support innovation, such as regulatory sandboxes or access to data, to facilitate the development of cutting-edge, but still safe, AI systems for use in healthcare.
Like the Algorithmic Accountability Act, DC’s Stop Discrimination by Algorithms Act is a horizontal piece of legislation that has been introduced twice – first in 2021 and again in 2023. However, with different priorities from the AI Act and the Algorithmic Accountability Act, the DC bill focuses on prohibiting discriminatory decisions made by algorithms, including AI, for what are termed important life opportunities. These include eligibility decisions about education, employment, housing, places of public accommodation, and insurance.
Insurance would presumably include health and life insurance, but healthcare is not explicitly referred to in the text, although it is important to note that healthcare would still be covered under relevant state and federal laws.
Finally, California Assembly Bill 331 (AB 331) was introduced in January 2023 to require developers and deployers of AI tools to conduct impact assessments, as well as to require deployers to notify users of the use of the tool. Developers would also be required to provide deployers with documentation about the use of the tool and its limitations, and either developers or deployers would be required to establish and maintain a governance programme with reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination from the use of automated decision tools within the scope of the legislation.
Like the EU AI Act, AB 331 outlines several categories of systems that would be covered, including healthcare and health insurance, encompassing mental health care, dental, and vision. Interestingly, the assembly bill also considers systems used in reproductive health, going beyond any of the previous regulations. However, like the Algorithmic Accountability Act, AB 331 does not consider how the bill would interact with other relevant laws already regulating healthcare and other critical applications of AI.
Healthcare is heavily regulated by sector-specific regulation that seeks to preserve patient care and prevent harm. While the use of algorithms and AI within healthcare can pose novel risks that would benefit from governance specifically designed to address them, this may not be adequately achieved by horizontal legislation, since there are key differences between risk management in healthcare and in more typical business practices.
For example, the use of protected attributes to make decisions is not permitted in other contexts, such as employment decisions. But given differences in disease symptomatology and presentation between demographic groups, protected attributes may need to be actively considered when making decisions to ensure patients are treated in the most effective way.
Therefore, it is important that there is appropriate and specific AI regulation in healthcare to prevent harm while also allowing patient demographics to be appropriately considered. Regulation in this space is still emerging, but it is clear that it will be vital to ensure that healthcare technologies are safe, effective, and fair.
Consequently, ensuring robust AI governance, managing associated risks, and maintaining compliance in this domain is paramount. At Holistic AI, we are world experts in AI Governance, Risk, and Compliance. Schedule a call to find out more about how we can help your organisation.
The EU's AI Act, as well as proposed bills in the United States such as California AB 331, the Algorithmic Accountability Act, and DC's Stop Discrimination by Algorithms Act, all take different approaches to regulating AI in healthcare. The AI Act does not explicitly categorise healthcare as high-risk, but elements of healthcare are covered. Healthcare would also be encompassed by horizontal legislation like the Algorithmic Accountability Act and the DC Stop Discrimination by Algorithms Act. California AB 331, meanwhile, would require impact assessments and governance programmes from developers and deployers of AI systems used in healthcare.
As more advanced AI systems are developed for diverse healthcare applications, horizontal regulations covering multiple sectors may not comprehensively address the unique risks in healthcare when taken in isolation. Many experts therefore argue that tailored governance is needed to manage patient care standards amid rapid innovation.
Regulators should work closely with medical AI experts to develop adaptive regulations that encourage responsible innovation while protecting patients. Consultation will enable regulations that are nimble but address sector-specific ethical concerns, upholding healthcare standards as technology progresses.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.