On 17 May 2024, Colorado’s Governor Jared Polis signed SB24-205 into law. First introduced on 10 April 2024 and passed by Colorado’s General Assembly on 8 May 2024, the law introduces consumer protections for artificial intelligence (AI) and comes into effect on 1 February 2026. Because the bill moved through the legislative process particularly quickly, Governor Polis’ signing statement expresses some reservations about the law, and the US Chamber of Commerce had previously written to Polis calling for a veto over concerns that the law’s impact on businesses and consumers had not been adequately assessed. In this blog post, we outline the key things you need to know about Colorado’s SB-205.
Colorado SB-205 aims to protect consumers from algorithmic discrimination resulting from AI systems used to make consequential decisions by imposing requirements on both developers and deployers of these systems in Colorado. In particular, the law requires developers and deployers to demonstrate reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination through a series of transparency, governance, and mitigation measures.
According to SB-205, algorithmic discrimination is:
“any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
Seeking to protect a range of characteristics that are already protected by equal opportunity laws, SB-205 clarifies that algorithmic discrimination does not include the use of high-risk AI systems solely for self-testing to identify, mitigate, or prevent discrimination, or for expanding an applicant or customer pool to increase diversity or redress historical discrimination. Moreover, acts or omissions by or on behalf of private clubs or other establishments not open to the public, as set forth in Title II of the Civil Rights Act of 1964, do not constitute algorithmic discrimination.
Colorado’s consumer protections for AI define an AI system as:
“any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
This definition shares commonalities with many other definitions of AI and AI systems, including that it can be used to generate a range of outputs and that it relies on a machine-based system.
Moreover, a high-risk AI system is defined by SB-205 as an AI system that makes or is a substantial factor in making a consequential decision, where a substantial factor is a factor that (i) assists in making a consequential decision; (ii) is capable of altering the outcome of a consequential decision; and (iii) is generated by an artificial intelligence system.
High-risk AI systems do not include systems used to perform a narrow procedural task or to detect decision-making patterns, provided they are not used to replace or influence previous human decisions. Furthermore, unless used to make, or as a substantial factor in making, consequential decisions, the following systems are not considered high-risk:
The term consequential decision, introduced in the definition of high-risk AI systems, is defined by Colorado’s SB-205 as a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:
These categories converge with applications of AI considered high-risk under other horizontal AI legislation around the world.
As part of their requirements to demonstrate reasonable care to protect against algorithmic discrimination, developers must:
As part of their requirements to demonstrate reasonable care to protect against algorithmic discrimination, deployers must:
SB-205 will be exclusively enforced by Colorado’s Attorney General, with a rebuttable presumption that reasonable care has been exercised if the developer or deployer has complied with SB-205 and any additional enforcement rules.
While no specific penalties have been outlined, a violation of the requirements of SB-205 constitutes an unfair trade practice. Where action is commenced by the Attorney General, it is an affirmative defense if the developer or deployer discovers and cures violations as a result of feedback, adversarial testing or red teaming, or internal reviews. Compliance with the NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001, another nationally or internationally recognized risk management framework with equivalent or more stringent requirements, or any framework designated by the Attorney General also provides an affirmative defense.
While the broad requirements of Colorado’s AI consumer protections have been outlined by the legal text, Colorado’s Attorney General is required to prescribe in greater detail some remaining elements. These include:
Furthermore, the Attorney General is empowered to promulgate additional rules necessary to support the enforcement of SB-205, including:
SB-205 applies to any developer or deployer of a high-risk AI system that does business in the state of Colorado, where a developer is an entity that develops or substantially modifies an AI system.
However, the obligations do not apply to developers or deployers of high-risk AI systems that have been approved, authorized, or certified by a federal agency, such as the FDA or FAA, or that are used in compliance with standards established by a federal agency, including the federal Office of the National Coordinator for Health Information Technology, if such standards are substantially equivalent to or more stringent than the requirements of SB-205.
Moreover, the obligations do not apply to developers or deployers conducting research to support an application for approval or certification from a federal agency, or performing work under a contract with the US Department of Commerce, Department of Defense, or NASA, unless the high-risk AI system is used to make, or is a substantial factor in making, decisions about employment or housing. The obligations likewise do not apply to AI systems acquired by or for the federal government or a federal agency or department, unless the system is used to make decisions about housing or employment.
There are also some specific entities that are exempt from SB-205’s provisions:
Notwithstanding the outstanding provisions to be developed by the Attorney General, SB-205 applies from 1 February 2026, giving organizations less than two years to become compliant.
With the AI regulatory ecosystem rapidly evolving, compliance is not something that can happen overnight, particularly when there are multiple frameworks and jurisdictional differences to navigate.
Schedule a demo with our experts to find out how Holistic AI can help you prioritize your AI Governance.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.