Colorado Becomes First State to Broadly Regulate AI Systems
On May 17, 2024, Colorado Governor Jared Polis signed into law the Colorado Artificial Intelligence Act (“CAIA”). The CAIA is a significant milestone for US AI regulation because it is the first state law to cover a broad range of AI systems. Though a similar bill proposed in Connecticut did not become law, it is likely that other states will follow Colorado’s lead in broadly regulating AI use.
How does the CAIA compare to the EU AI Act?
The CAIA is similar to the EU AI Act in three ways.
First, the majority of requirements in both laws pertain to high-risk AI systems. Second, alignment with both laws can be shown, to a significant degree, by conforming to standards such as ISO/IEC 42001 (AI Management Systems). Third, both laws require impact assessments for high-risk systems.
There are also at least two major differences between the CAIA and the EU AI Act.
First, the EU AI Act’s requirements are broader and more demanding than those of the CAIA. The CAIA, which is primarily concerned with algorithmic discrimination, carves out several common kinds of AI systems from its definition of a high-risk AI system. It also exempts deployers with fewer than 50 employees from many of its requirements, provided they meet certain criteria.
Second, most of the CAIA’s provisions will come into effect sooner than the bulk of the EU AI Act’s provisions. Companies have roughly 20 months to become compliant with the CAIA, which takes effect on February 1, 2026. In comparison, the EU AI Act’s requirements for high-risk systems will not apply until mid-2026, 24 months after the Act’s entry into force.
Which AI systems are covered by the CAIA?
The CAIA applies to developers or deployers of AI systems doing business in Colorado. A developer is someone who develops or intentionally and substantially modifies an AI system, while a deployer is someone who deploys a high-risk AI system.
According to the CAIA, an AI system is high-risk if, when deployed, it “makes, or is a substantial factor in making, a consequential decision.” A consequential decision is a decision that has “a material legal or similarly significant effect on the provision or denial to any consumer of, or to the cost or terms of” educational enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health care services, housing, insurance, or a legal service. Additionally, some kinds of AI systems are carved out of the high-risk category.
How will the CAIA be enforced?
The Colorado Attorney General will have exclusive authority to enforce the CAIA and to issue rules and guidance related to it. The CAIA does not provide a private right of action. If the Colorado Attorney General commences an enforcement action, a deployer will have a “rebuttable presumption” that it used reasonable care to protect consumers from algorithmic discrimination if it has a risk management policy and program that aligns with the NIST AI Risk Management Framework (“AI RMF”) or ISO/IEC 42001. Additionally, in an enforcement action, a company has an “affirmative defense” if it discovers and cures a CAIA violation as a result of feedback, adversarial testing or red teaming (as those terms are defined by NIST), or an internal review process, and “is otherwise in compliance with” the NIST AI RMF and ISO/IEC 42001.
For more information on how to develop a risk management policy and program, request our free and comprehensive guide on how you can draft an AI policy here.
What does the CAIA require from developers and deployers of high-risk AI systems?
The CAIA requires developers and deployers to use “reasonable care” to protect individuals from known or foreseeable risks of algorithmic discrimination and to describe their high-risk AI uses on their websites.
Developers are required to notify the Colorado Attorney General and known deployers of any known or reasonably foreseeable risks of algorithmic discrimination without unreasonable delay, and in no event later than 90 days after discovery. Deployers, in turn, are required to notify the Colorado Attorney General within the same timeframe once they learn that a high-risk system has caused algorithmic discrimination.
Developers are also required to provide deployers with “a general statement describing reasonably foreseeable uses and known harmful or inappropriate uses of the system,” along with related documentation.
Deployers are required to conduct impact assessments of high-risk AI systems annually and within 90 days of any “intentional and substantial modification” of the AI system.
Deployers are required to have a risk management policy and program in place, to give individuals notice before a high-risk AI system is used to make a consequential decision about them, to explain how adverse consequential decisions were made, to provide an opportunity to correct inaccurate personal information used in making such decisions, and to provide an opportunity to appeal such decisions.
All AI Systems Must Be ‘Labeled’
While most of the CAIA’s requirements pertain to high-risk AI systems, its labeling requirement applies to all AI systems intended to interact with individuals. For such systems, a developer or deployer must disclose to an individual that the individual is interacting with an AI system, unless this would be “obvious to a reasonable person.”
Enzai is here to help
Enzai’s product can help your company comply with Colorado requirements, the NIST AI RMF, ISO/IEC 42001, the EU AI Act and other global regulatory and assurance regimes. To learn more, get in touch here.