
Colorado Passes Law Enacting Consumer Protections for AI

Authored by
Nikitha Anand
Policy Analyst at Holistic AI
Published on
May 9, 2024

On 8 May 2024, the US state of Colorado passed SB24-205, enacting protections for consumers in their interactions with AI systems. The bill was first introduced on 10 April of this year, making its passage remarkably quick. The law reflects the rapid escalation of efforts, both in the US and globally, to tackle AI risks and harms through regulation, and comes on the heels of several recently released pieces of guidance and proposed laws from the US federal government aimed at sweeping legislation of AI. The passage of SB205 also builds on Colorado’s existing progress towards AI regulation with SB21-169, passed in mid-2021, which protects consumers from unfair discrimination resulting from the use of external consumer data and algorithms in insurance.

Key provisions of Colorado’s Consumer Protections for Artificial Intelligence

SB205 focuses specifically on the regulation of high-risk AI systems and mandates that developers and deployers of such systems must take reasonable precautions to prevent algorithmic discrimination within those systems. Here, algorithmic discrimination is defined as unlawful differential treatment or impact that disfavours individuals or groups based on protected attributes, while high-risk systems are those used to make consequential decisions relating to education, employment, financial services, government services, healthcare, housing, insurance, or legal services.

What are the key provisions for developers under SB205?

A developer is presumed to have exercised reasonable care if they adhere to specific provisions outlined in the bill, including:

  • Providing a deployer of the high-risk system a statement that discloses specified information about the system.
  • Providing a deployer of the high-risk system with the necessary information and documentation to conduct an impact assessment of the system.
  • Issuing a publicly available statement outlining the types of high-risk systems that the developer has developed or intentionally and substantially modified and that are currently available to deployers. The statement should detail how the developer manages known or reasonably foreseeable risks of algorithmic discrimination stemming from the development or modification of each of these systems.
  • Disclosing to the Attorney General and known deployers of the high-risk system any known or reasonably foreseeable risks of algorithmic discrimination that the system has caused or is likely to cause, within 90 days of discovering the risk or receiving a credible report from a deployer indicating such a risk.

What are the requirements for deployers under Colorado’s Consumer Protections for AI?

As with developers, a deployer is presumed to have exercised reasonable care if they have complied with specified provisions in the bill, including:

  • Implementing risk management protocols for the high-risk system.
  • Completing an impact assessment of the high-risk system.
  • Conducting annual reviews of the deployment of each high-risk system to ensure it has not caused any algorithmic discrimination.
  • Informing consumers about specified details if the high-risk system makes a significant decision about them.
  • Providing consumers the opportunity to correct any incorrect personal data that a high-risk system processed in making a significant unfavourable decision about them, and to appeal, via human review where possible, any adverse decisions arising from the deployment of a high-risk system.
  • Issuing a public statement outlining the types of high-risk systems the deployer currently deploys, how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination associated with each system, and the nature, source, and extent of the information the deployer collects and utilises.
  • Disclosing to the Attorney General, within 90 days of discovery, any algorithmic discrimination that the high-risk system has caused or is likely to cause.

Who do Colorado’s AI Consumer Protections apply to?

According to the text, this law applies to any person who does business in the state of Colorado, including a deployer or other developer that utilises or makes available an AI system that is intended to interact with consumers. Any applicable person must ensure that the AI system discloses to each consumer who interacts with it that the consumer is interacting with an AI system.

The bill, however, does not restrict a developer’s or deployer’s ability to engage in specified activities, including:

  • Complying with federal, state, or municipal laws, ordinances, or regulations
  • Cooperating with and conducting specified investigations
  • Taking immediate action to protect interests that are essential for the life or physical safety of a consumer
  • Conducting and engaging in specified research activities

How will Colorado’s Consumer Protections for AI be enforced?

Enforcement of the bill rests exclusively with the Colorado Attorney General. Developers and deployers of high-risk systems have until 1 February 2026 to come into compliance; after that date, they must disclose risks of algorithmic discrimination to the Attorney General within 90 days of discovery, and enforcement action can be taken.

Between 1 July 2025 and 30 June 2026, the Attorney General must, prior to initiating any enforcement action, issue a notice of violation to the alleged violator and allow 60 days for rectification.

The bill provides defences for developers or deployers if:

  • Those developing or deploying high-risk systems involved in a potential violation are in compliance with a nationally or internationally recognised AI risk management framework designated by the bill or the Attorney General.
  • The developer or deployer takes specified measures to discover violations of the bill.

The law does not yet specify penalties for violations.

Maximize compliance with Holistic AI

The specific provisions for preventing algorithmic discrimination in this bill highlight the necessity for developers and deployers of high-risk AI systems to ensure their systems are continuously monitored and evaluated.

Schedule a demo to find out how Holistic AI’s Governance Platform can help you maximise legal compliance by providing a 360° evaluation of your company’s AI systems to ensure responsible development and deployment.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
