Texas Considers Comprehensive AI Regulation
Texas State Representative Giovanni Capriglione introduced the Texas Responsible AI Governance Act (TRAIGA) as HB 1709 in early January. Like the Colorado AI Act, TRAIGA uses a risk-based framework that would apply across industries. However, if enacted, TRAIGA would be more significant than Colorado’s law, both because of the size of the Texas economy and because TRAIGA’s requirements are more rigorous.
Risk-based approach to prevent discrimination
Like the Colorado law, TRAIGA’s primary objective is to require those developing and deploying AI to exercise “reasonable care” to prevent discrimination on the basis of a protected class.
TRAIGA would ban several AI use cases, including manipulating human behavior, social scoring, generating certain kinds of harmful content and capturing specified biometric markers, among others.
For AI systems deemed high-risk, TRAIGA would impose requirements upon developers, deployers and distributors.
A high-risk AI system is defined as one used as a “substantial factor” in a “consequential decision,” though several use cases are carved out from this definition.
A substantial factor is defined as a factor that is a) “considered when making a consequential decision,” b) “likely to alter the outcome of a consequential decision,” and c) “weighed more heavily than any other factor contributing to the consequential decision.”
A consequential decision is defined as one that has a “material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms or conditions” of a) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision, b) education enrollment or an education opportunity, c) employment or an employment opportunity, d) a financial service, e) an essential government service, f) residential utility services, g) a health-care service or treatment, h) housing, i) insurance, j) a legal service, k) a transportation service, l) constitutionally protected services or products, or m) an election or voting process.
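To make the interplay of these definitions concrete, the sketch below encodes them as a rough screening check. It is purely illustrative: the domain labels, field names and the conjunctive reading of the three substantial-factor prongs are our assumptions for the sketch, and the bill text itself controls.

```python
from dataclasses import dataclass

# Illustrative shorthand for TRAIGA's list of consequential-decision domains.
CONSEQUENTIAL_DOMAINS = {
    "criminal_justice", "education", "employment", "financial_services",
    "essential_government_services", "residential_utilities", "healthcare",
    "housing", "insurance", "legal_services", "transportation",
    "constitutionally_protected_services", "elections",
}

@dataclass
class AIUseCase:
    """Hypothetical record of how an AI system figures in a decision."""
    decision_domain: str          # e.g. "employment"
    considered_in_decision: bool  # factor is considered when deciding
    likely_alters_outcome: bool   # factor is likely to change the result
    weighed_most_heavily: bool    # factor outweighs any other factor
    carved_out: bool = False      # use case exempted by the bill

def is_high_risk(use: AIUseCase) -> bool:
    """Rough screening test for a 'high-risk' system under TRAIGA's definitions.

    A system is treated as high-risk when it is a 'substantial factor' in a
    'consequential decision' and no carve-out applies. This sketch reads the
    three substantial-factor prongs conjunctively, as summarized above.
    """
    consequential = use.decision_domain in CONSEQUENTIAL_DOMAINS
    substantial = (
        use.considered_in_decision
        and use.likely_alters_outcome
        and use.weighed_most_heavily
    )
    return consequential and substantial and not use.carved_out

# Example: a resume-screening model that drives hiring decisions.
screening = AIUseCase(
    decision_domain="employment",
    considered_in_decision=True,
    likely_alters_outcome=True,
    weighed_most_heavily=True,
)
print(is_high_risk(screening))  # True
```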
Risk Management and Reporting
High-Risk Reports: Developers must give deployers reports that outline how an AI system should be used, known limitations that could lead to discrimination and summaries of training data, among other elements.
Impact Assessments: Deployers must conduct these for high-risk systems annually and within 90 days of a substantial modification. However, “a single impact assessment may address a comparable set” of high-risk AI systems.
Management Policies: Developers and deployers must implement organizational policies to govern their AI development or deployment.
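For teams operationalizing these obligations, the assessment cadence can be tracked programmatically. The sketch below is a minimal illustration under stated assumptions: the record type, field names and date logic are hypothetical internal tooling, not anything prescribed by the bill, and the 90-day and annual windows simply mirror the summary above.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AssessmentRecord:
    """Hypothetical internal record for one high-risk AI system."""
    system_name: str
    last_impact_assessment: date
    last_substantial_modification: date | None = None

def impact_assessment_due(record: AssessmentRecord, today: date) -> bool:
    """Flag systems whose impact assessment appears overdue.

    Mirrors the cadence summarized above: assessments annually and within
    90 days of a substantial modification. Deadlines here are illustrative
    approximations, not legal advice.
    """
    annual_deadline = record.last_impact_assessment + timedelta(days=365)
    if today > annual_deadline:
        return True
    if record.last_substantial_modification is not None:
        modification_deadline = record.last_substantial_modification + timedelta(days=90)
        if (today > modification_deadline
                and record.last_impact_assessment < record.last_substantial_modification):
            return True
    return False

# Example: a system modified in March with no reassessment since January.
record = AssessmentRecord(
    system_name="resume-screener",
    last_impact_assessment=date(2025, 1, 15),
    last_substantial_modification=date(2025, 3, 1),
)
print(impact_assessment_due(record, today=date(2025, 7, 1)))  # True
```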
TRAIGA comes with fines of $50,000-$100,000 per violation.
Texas Artificial Intelligence Council
TRAIGA would create an AI council attached to the Office of the Governor. Among other responsibilities, the AI council would be tasked with ensuring that AI systems operate in the public’s best interest; identifying and suggesting reforms to laws that impede AI innovation; and investigating and evaluating “potential instances of regulatory capture.” By giving such wide-ranging powers to the AI council in a jurisdiction as large as Texas, TRAIGA would position the council to become one of the world’s most significant AI regulatory authorities.
TRAIGA’s Future
TRAIGA’s proponents will paint it as a common-sense bill with a familiar risk-based approach and supporters from different political backgrounds. Critics will characterize it as unfriendly to small businesses and open-source AI (despite certain carve-outs), anti-innovation and unnecessarily focused on the technology rather than the harm.
As the Texas legislature considers the bill’s implications, TRAIGA promises to become a central part of the AI policy conversation in 2025.
Enzai is here to help
Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.