5-minute read | September 12, 2025
Colorado enacted a first-of-its-kind AI law, the Colorado AI Act (SB 24-205), governing the development and use of artificial intelligence.
Here are five things you should know about the Colorado AI Act in its current form—and how it may change before it takes effect.
While the AI Act was originally slated to go into effect in February 2026, Colorado has faced mounting pressure to change the law due to concerns about unintended impacts on consumers and businesses. Shortly after signing the bill into law, Colorado Gov. Jared Polis said in a letter that legislators plan to revise the law “to ensure the final regulatory framework will protect consumers and support Colorado’s leadership in the AI sector.”
Six months ahead of its planned effective date, the governor announced a special legislative session to address “the fiscal and implementation impact” of the law on consumers, businesses, and state and local governments. While lawmakers were unable to agree on revisions to the law’s obligations during the August 2025 special session, they did amend the law’s implementation date, delaying it to June 30, 2026.
The Colorado legislature will reconvene in January 2026 and is expected to further consider substantive amendments to the Act.
The Act applies only to “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”
Despite several exceptions for systems that perform narrow procedural tasks or merely augment human decision-making, these definitions can be interpreted broadly to apply to a wide range of technologies.
The governor’s letter makes clear that revisions to the Act will refine the definitions to ensure the Act governs only the most high-risk systems.
As a result, the Act in its final form may apply only to AI systems that truly impact decisions with a material legal or similarly significant effect on designated high-importance services.
The Act applies to anyone who does business in Colorado and develops or intentionally and substantially modifies a high-risk artificial intelligence system. It requires them to use reasonable care to protect consumers from algorithmic discrimination.
Developers must make documentation available to deployers or other developers of the system. The documentation must disclose, among other things:
In its current form, the Act requires developers to proactively inform the Colorado Attorney General, as well as known deployers and other developers, of any algorithmic discrimination issues. The governor’s letter, however, indicates an intent to shift to a more traditional enforcement framework without mandatory proactive disclosures.
The Act also requires anyone who does business in Colorado and uses a high-risk artificial intelligence system to use reasonable care to protect consumers from algorithmic discrimination relating to such systems. Deployers must:
As passed, the Act would require deployers to proactively inform the Colorado Attorney General of any algorithmic discrimination. The governor’s letter, though, indicates that Colorado intends to shift to a more traditional enforcement framework without mandatory proactive disclosures.
In addition, the letter says legislators plan to amend the Act to focus regulation on the developers of high-risk artificial intelligence systems rather than smaller companies that deploy them. As a result, we may see scaled-back deployer obligations or broader deployer exemptions in the final implemented regulatory framework.
Developers and deployers must provide a public statement to consumers summarizing the types of high-risk artificial intelligence systems they develop or use, and how they mitigate algorithmic discrimination risks.
Deployers also must notify consumers when they use a high-risk artificial intelligence system to make a consequential decision, or when such a system is a substantial factor in making that decision. They must do this before the decision is made. They must also provide the consumer information about the decision and, where available, the right to opt out.
If a high-risk artificial intelligence system results in an adverse decision for a consumer, the deployer must:
Lastly, the Act requires that any artificial intelligence system (whether high-risk or not) intended to interact with consumers be accompanied by a disclosure that the consumer is interacting with an artificial intelligence system.
While the final form of the Colorado AI Act may further deviate from the version passed by the state legislature, businesses can start preparing for material AI regulation by:
If you have questions about this development, please reach out to the authors (Shannon Yavorsky, Nick Farnsworth, Matthew Coleman, Peter Graham), or other members of the Orrick team.
Want to view AI laws in another state? Our U.S. AI law tracker now features advanced search and filtering capabilities. Filter all 160+ state AI laws by state, effective date, or AI scope (healthcare, deepfakes, government use, etc.). Bookmark this page: All States.