Federal AI Regulation in Post-Chevron America
“Chevron is overruled,” reads the majority opinion authored by Chief Justice John Roberts in Loper Bright Enterprises v. Raimondo, decided by the U.S. Supreme Court on June 28, 2024. As a result of this 6-3 ruling, introducing federal requirements for AI use will likely become more difficult.
What is the Chevron doctrine and why was it overruled?
The Chevron doctrine required federal courts to defer to “reasonable” agency interpretations of laws that were silent or ambiguous on specific issues. It was established by the Court’s 1984 ruling in Chevron U.S.A. Inc. v. Natural Resources Defense Council.
In Loper Bright, the Court determined that the Chevron doctrine did not align with Section 706 of the Administrative Procedure Act (APA), which requires courts to decide “all relevant questions of law” presented in relation to an agency action.
Federal courts are therefore no longer required to defer to agencies’ interpretations of statutes that are ambiguous or silent on specific issues.
Will the end of the Chevron doctrine result in more litigation?
It is not yet clear how the end of the Chevron doctrine will affect agency interpretations of existing laws.
Courts can still consider agency interpretations of statutes to the extent that they have the “power to persuade,” based on a variety of factors laid out in Skidmore v. Swift & Co. (1944). However, this “persuasive” standard is a much higher bar for agencies to clear than the “reasonable” standard under the Chevron doctrine.
Previous cases that relied on Chevron deference still stand and will not be re-litigated. However, many agency interpretations of existing federal statutes have not yet been challenged in court – and those are now open to challenge.
How will this affect federal AI regulation?
Since AI is a new regulatory field, it is likely to be significantly affected by the end of the Chevron doctrine. Federal AI rules are likely to stem from three sources, all of which will be affected by Loper Bright:
Agency interpretations of statutes that specifically and broadly cover AI systems: There is currently no federal law that specifically and broadly covers AI systems. If, in the future, legislators were to pass such a law, it would almost certainly be ambiguous or silent on many key issues, given the technical and rapidly changing nature of AI systems. Agency interpretations of those issues are likely to be challenged in court. Courts will address “all relevant questions of law” and will not have to defer to reasonable agency interpretations, unless federal AI laws expressly delegate authority to agencies to interpret specific issues and those delegations are limited in scope and consistent with the APA.
Agency applications of existing statutes to AI systems: Agencies such as the Federal Trade Commission are proactively interpreting existing laws as they relate to AI systems and related data. Now, if those interpretations stretch beyond the plain meaning of the statutory text, they are likely to face court challenges.
Trustworthy AI Executive Order: Many elements of the Biden administration’s wide-ranging Trustworthy AI Executive Order (EO) of October 2023 are unlikely to be affected by the end of the Chevron doctrine, both because the EO focused on administrative steps rather than statutory interpretation and because it set an aggressive timeline for those steps to be taken. However, many anticipated follow-on effects of the EO – such as federal agencies drawing on EO concepts when issuing guidance that applies existing laws to AI systems – will be significantly curtailed by the end of the Chevron doctrine.
How will this affect other sources of AI guidance?
Given these complications at the federal level, state laws may become more important as sources of guidance on AI governance. State laws and state agencies are not affected by the end of the Chevron doctrine.
Additionally, “soft law” instruments that do not have the force of law or regulation may become increasingly important sources of guidance for organizations. These include non-regulatory frameworks like the NIST AI Risk Management Framework, international standards like ISO/IEC 42001, and more granular soft law instruments that relate to specific industries or kinds of AI.
Enzai is here to help
Enzai’s product can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.