10 Things You Need to Know about Colorado’s SB205 Consumer Protections for Artificial Intelligence

Authored by Airlie Hilliard, Senior Researcher at Holistic AI, and Nikitha Anand, Policy Analyst at Holistic AI
Published on May 20, 2024

On 17 May 2024, Colorado’s Governor Jared Polis signed SB24-205 into law. First introduced on 10 April 2024 and passed by Colorado’s General Assembly on 8 May 2024, the law introduces consumer protections for artificial intelligence (AI) and comes into effect on 1 February 2026. Because the law moved through the legislative process particularly quickly, Governor Polis’ signing statement expresses some reservations about it, and the US Chamber of Commerce had previously written to Polis calling for a veto over concerns that the law’s impact on businesses and consumers had not been adequately assessed. In this blog post, we outline the key things you need to know about Colorado’s SB-205.

1. How does SB-205 provide consumer protections for AI?

Colorado SB-205 aims to protect consumers from algorithmic discrimination resulting from AI systems used to make consequential decisions by imposing requirements on both developers and deployers of these systems in Colorado. In particular, the law requires developers and deployers to demonstrate reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination through a series of transparency, governance, and mitigation measures.

2. What is algorithmic discrimination under Colorado SB-205?

According to SB-205, algorithmic discrimination is:

“any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

Seeking to protect a range of characteristics that are already covered by equal opportunity laws, SB-205 clarifies that algorithmic discrimination does not include the use of high-risk AI systems solely for self-testing to identify, mitigate, or prevent discrimination, or for expanding an applicant or customer pool to increase diversity or redress historical discrimination. Moreover, the use of high-risk AI systems by or on behalf of private entities, as set out in Title II of the Civil Rights Act of 1964, does not constitute algorithmic discrimination.

3. What is an artificial intelligence system according to Colorado SB-205?

Colorado’s consumer protections for AI define an AI system as:

“any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

This definition shares commonalities with many other definitions of AI and AI systems, notably its focus on a machine-based system that infers how to generate a range of outputs.

Moreover, a high-risk AI system is defined by SB-205 as an AI system that makes or is a substantial factor in making a consequential decision, where a substantial factor is a factor that (i) assists in making a consequential decision; (ii) is capable of altering the outcome of a consequential decision; and (iii) is generated by an artificial intelligence system.

High-risk AI systems do not include systems used to perform a narrow procedural task or to detect decision-making patterns, provided they are not used to replace or influence previous human decisions. Furthermore, unless they are used to make, or are a substantial factor in making, consequential decisions, the following systems are not considered high-risk:

  • Anti-fraud systems that do not use facial recognition
  • Anti-malware, anti-virus tools, and firewalls
  • AI-enabled video games
  • Calculators
  • Cybersecurity tools
  • Databases and data storage tools
  • Internet domain registration tools and internet website loading tools
  • Networking
  • Spam and robocall filters
  • Spell checkers
  • Spreadsheets
  • Web caching and web hosting
  • Technology for communicating with consumers to provide information, make referrals or recommendations, and answer questions if it is subject to an accepted use policy that prohibits discriminatory or harmful content.

4. What is a consequential decision under Colorado’s Consumer Protections for AI?

Introduced in the definition of high-risk AI systems, Colorado’s SB-205 defines a consequential decision as one that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:

  • Education
  • Employment
  • Financial services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

These categories converge with applications of AI considered high-risk under other horizontal AI legislation around the world.
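
To make the interplay between these definitions more concrete, the snippet below is a minimal, purely illustrative Python sketch of how an organization might triage its systems against the consequential-decision categories and the three-part “substantial factor” test. The category labels, field names, and functions are our own simplifications, the statutory exclusions are not modelled, and nothing here is legal guidance.

```python
from dataclasses import dataclass

# Hypothetical shorthand for the consequential-decision areas listed in SB-205;
# the labels are ours, not the statute's.
CONSEQUENTIAL_DECISION_AREAS = {
    "education", "employment", "financial_services", "essential_government_services",
    "healthcare_services", "housing", "insurance", "legal_services",
}

@dataclass
class AISystemProfile:
    """Illustrative record of how an AI system participates in a decision."""
    decision_area: str       # e.g. "employment"
    assists_decision: bool   # (i) the output assists in making the decision
    can_alter_outcome: bool  # (ii) the output is capable of altering the outcome
    generated_by_ai: bool    # (iii) the factor is generated by an AI system

def is_substantial_factor(p: AISystemProfile) -> bool:
    """Rough reading of the three-part 'substantial factor' test."""
    return p.assists_decision and p.can_alter_outcome and p.generated_by_ai

def is_high_risk(p: AISystemProfile) -> bool:
    """A system is high-risk if it makes, or is a substantial factor in making,
    a consequential decision (exclusions such as spam filters are not modelled)."""
    return p.decision_area in CONSEQUENTIAL_DECISION_AREAS and is_substantial_factor(p)

# Example: a resume-screening tool whose scores can change hiring outcomes
screener = AISystemProfile("employment", assists_decision=True,
                           can_alter_outcome=True, generated_by_ai=True)
print(is_high_risk(screener))  # True under this simplified reading
```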

5. What requirements does Colorado’s SB-205 impose on developers?

As part of their requirement to demonstrate reasonable care to protect consumers against algorithmic discrimination, developers must do the following (see the illustrative sketch after this list):

  • Take reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
  • Provide deployers with documentation on:
    • Reasonably foreseeable uses and risks of the system.
    • Data used to train the system.
    • Intended uses, purposes, and outputs of the system.
    • Evaluation, risk mitigation, and data governance measures taken.
  • Make available a public statement that includes:
    • The types of high-risk systems a developer makes available.
    • Management of known and foreseeable risks while developing a system.
  • Disclose to the attorney general, and to all other known developers and deployers of the high-risk system, any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of discovery.
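
The sketch below is a hypothetical illustration (not text from the statute) of how a developer might organize the documentation package handed to deployers and track the 90-day disclosure window for newly discovered risks; the field names and helper function are our own assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative shorthand for the documentation SB-205 requires developers to
# provide to deployers; field names are our own, not the statute's.
@dataclass
class DeveloperDocumentation:
    foreseeable_uses_and_risks: str
    training_data_summary: str
    intended_uses_and_outputs: str
    evaluation_and_mitigation_measures: str
    data_governance_measures: str

@dataclass
class RiskDiscovery:
    description: str
    discovered_on: date

def disclosure_deadline(discovery: RiskDiscovery) -> date:
    """Developers have 90 days from discovery to notify the attorney general
    and known developers/deployers of the high-risk system."""
    return discovery.discovered_on + timedelta(days=90)

risk = RiskDiscovery("Disparate selection rates observed for one age group",
                     discovered_on=date(2026, 3, 1))
print(disclosure_deadline(risk))  # 2026-05-30
```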

6. How are deployers required to prevent algorithmic discrimination under Colorado’s SB205?

As part of their requirements to demonstrate reasonable care to protect against algorithmic discrimination, deployers must:

  • Implement a risk management policy that specifies the principles, processes, and personnel that a deployer uses to identify, document, and mitigate any known or foreseeable risks.
    • This policy must consider the AI Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST), ISO/IEC 42001 from the International Organization for Standardization, or any other recognized risk management framework for AI.
  • Complete an impact assessment (annually and within 90 days of making an intentional and substantial modification to the high-risk system) specifying the purpose, intended use, known or reasonably foreseeable risks of algorithmic discrimination, categories of data used and produced, performance evaluation metrics, transparency measures, and post-deployment monitoring of the system (a minimal sketch of such a record follows this list).
    • Impact assessments must be retained for at least three years after deployment.
  • Notify consumers, before a decision is made, that a high-risk system has been deployed to make, or be a substantial factor in making, the consequential decision. Consumers must be provided with a statement on the purpose of the system, the nature of the consequential decision, sources of personal data processed, information on their right to opt out of the processing of personal data for the purposes of profiling, and other details.
  • Disclose to the attorney general any discovery of algorithmic discrimination within 90 days of its discovery.
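
As a minimal, hypothetical sketch of the record-keeping described above, the snippet below shows one way a deployer might capture the required contents of an impact assessment and compute the annual and post-modification re-assessment deadlines; the field names, constants, and helper are our own assumptions rather than anything prescribed by SB-205.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative shorthand for the contents SB-205 requires in an impact assessment.
@dataclass
class ImpactAssessment:
    system_name: str
    purpose_and_intended_use: str
    known_discrimination_risks: str
    data_categories_used_and_produced: str
    performance_metrics: str
    transparency_measures: str
    post_deployment_monitoring: str
    completed_on: date

RETENTION_PERIOD = timedelta(days=3 * 365)   # retain for at least three years
ANNUAL_REVIEW = timedelta(days=365)          # reassess at least annually
MODIFICATION_WINDOW = timedelta(days=90)     # reassess within 90 days of a substantial modification

def next_assessment_due(last: ImpactAssessment,
                        modified_on: Optional[date] = None) -> date:
    """Earlier of the annual refresh or the 90-day window after an intentional
    and substantial modification to the high-risk system."""
    due = last.completed_on + ANNUAL_REVIEW
    if modified_on is not None:
        due = min(due, modified_on + MODIFICATION_WINDOW)
    return due

ia = ImpactAssessment(
    system_name="resume screener",
    purpose_and_intended_use="rank applicants for recruiter review",
    known_discrimination_risks="possible disparate impact by age",
    data_categories_used_and_produced="resume text; suitability scores",
    performance_metrics="selection-rate parity, accuracy",
    transparency_measures="pre-decision consumer notice",
    post_deployment_monitoring="quarterly disparate-impact review",
    completed_on=date(2026, 2, 1),
)
print(next_assessment_due(ia))                     # annual refresh: 2027-02-01
print(next_assessment_due(ia, date(2026, 6, 15)))  # after modification: 2026-09-13
```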

7. How will Colorado enforce its AI consumer protections?

SB-205 will be exclusively enforced by Colorado’s Attorney General, with a rebuttable presumption that reasonable care has been exercised if the developer or deployer has complied with SB-205 and any additional enforcement rules.

While no specific penalties have been outlined, a violation of the requirements of SB-205 constitutes an unfair trade practice. Where action is commenced by the Attorney General, it is an affirmative defense if the developer or deployer discovers and cures violations as a result of feedback, adversarial testing or red teaming, or internal reviews. Compliance with the AI RMF or ISO/IEC 42001, another nationally or internationally recognized risk management framework with equivalent or more stringent requirements, or any framework designated by the Attorney General also provides an affirmative defense.

8. What aspects of SB-205 are yet to be prescribed?

While the broad requirements of Colorado’s AI consumer protections have been outlined by the legal text, Colorado’s Attorney General is required to prescribe in greater detail some remaining elements. These include:

  • The form and manner for developers of high-risk AI systems to provide a disclosure of the known or reasonably foreseeable risks of algorithmic discrimination that could result from the intended use of the high-risk AI system.
  • The form and manner for developers to provide any requested information on their statement of the covered AI system’s intended usage and limitations, as well as documentation on data and on discrimination measurement and mitigation procedures.
  • The form and manner for disclosing the discovery that a high-risk AI system has resulted in algorithmic discrimination.
  • The form and manner for providing the developer’s risk management policy, impact assessment completed internally or externally, and maintenance of required records.

Furthermore, the Attorney General is empowered to promulgate additional rules necessary to support the enforcement of SB-205, including:

  • Documentation and requirements for developers.
  • The contents of, and requirements for, the notices, disclosures, and risk management policy and program.
  • The content and requirements of the impact assessments.
  • The requirements for the rebuttable presumptions.
  • Requirements for the affirmative defense, including how other nationally or internationally recognized risk management frameworks for artificial intelligence systems will be acknowledged.

9. Are there any exemptions to Colorado’s SB205?

SB-205 applies to any developer or deployer of a high-risk AI system that does business in the state of Colorado, where a developer is an entity that develops or substantially modifies an AI system.

However, the obligations do not apply to developers or deployers of high-risk systems that have been approved or similarly authorized by a federal agency, such as the FDA or FAA, or that are used in compliance with standards established by a federal agency, including the Federal Office of the National Coordinator for Health Information Technology, if such standards are substantially equivalent to or more stringent than the requirements of SB-205.

Moreover, the obligations do not apply to developers or deployers conducting research to support an application for approval or certification from a federal agency, or performing work under a contract with the US Department of Commerce, Department of Defense, or NASA, unless the high-risk AI system is used to make, or is a substantial factor in making, decisions about employment or housing. Furthermore, AI systems acquired by or for the federal government or a federal agency or department are also exempt, unless they are used to make decisions about housing or employment.

There are also some specific entities that are exempt from SB-205’s provisions:

  • Developers or deployers that are covered entities under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and provide healthcare recommendations generated by AI that require a healthcare provider to take action to implement them; such recommendations are not considered high-risk.
  • An insurer or fraternal benefit society.
  • A bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, or out-of-state credit union.

10. When will Colorado’s AI consumer protections come into effect?

Notwithstanding the provisions still to be developed by the Attorney General, SB-205 applies from 1 February 2026, giving organizations less than two years to become compliant.

Get compliant with Holistic AI

With the AI regulatory ecosystem rapidly evolving, compliance is not something that can happen overnight, particularly when there are multiple frameworks and jurisdictional differences to navigate.

Schedule a demo with our experts to find out how Holistic AI can help you prioritize your AI Governance.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
