On 8 May 2024, the American state of Colorado passed SB24-205, enacting protections for consumers in their interactions with AI systems. The bill was first introduced on 10 April of this year, making this a remarkably quick passage of the law. The law represents the rapid escalation of efforts, both in the country and globally, to tackle AI risks and harms by mandating regulation of AI, and comes on the heels of several recently released pieces of guidance and proposed laws from the US federal government for sweeping legislation of AI. The passing of SB205 also builds on Colorado’s existing progress towards AI regulation with SB21-169, passed mid-2023, which protects consumers from unfair discrimination resulting from the use of external customer data and algorithms in insurance.
SB205 focuses specifically on the regulation of high-risk AI systems and mandates that developers and deployers of such systems must take reasonable precautions to prevent algorithmic discrimination within those systems. Here, algorithmic discrimination is defined as unlawful differential treatment or impact that disfavours individuals or groups based on protected attributes, while high-risk systems are those used to make consequential decisions relating to education, employment, financial services, government services, healthcare, housing, insurance, or legal services.
It can be assumed that a developer has exercised reasonable care if they adhere to specific provisions outlined in the bill including:
Similarly to developers, it can be assumed that a deployer has used reasonable care if they have complied with specified provisions in the bill including:
According to the text, this law applies to any person who does business in the state of Colorado, including a deployer or other developer that utilises or makes available an AI system that is intended to interact with consumers. Any applicable person must ensure that the AI system discloses to each consumer who interacts with it that the consumer is interacting with an AI system.
The bill, however, does not restrict certain abilities of a developer or deployer to engage in specified activities including:
Enforcement of the bill lies solely in the power of the Attorney General. Developers and deployers of such high-risk systems have until 1 February 2026 to come into compliance, after which they have to make disclosures to the Attorney General about risks of algorithmic discrimination within 90 days of their discovery, and enforcement action can be taken.
Between 1 July 2025 and 30 June 2026, the Attorney General must, prior to initiating any enforcement action, issue a notice of violation to the alleged violator and allow 60 days for any rectifications to be made.
The bill provides defences for developers or deployers if:
The law has not specified any penalties for violations as of yet.
The specific provisions for preventing algorithmic discrimination in this bill highlight the necessity for developers and deployers of high-risk AI systems to ensure their systems are continuously monitored and evaluated.
Schedule a demo to find out how Holistic AI’s Governance Platform can help you maximise legal compliance by providing a 360° evaluation of your company’s AI systems to ensure responsible development and deployment.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.