Congress Could Block State AI Laws: What You Need to Know

Federal Proposal to Limit State AI Regulation Advances
A proposed federal measure that would bar state and local governments from regulating artificial intelligence (AI) for as long as ten years is nearing enactment. Senator Ted Cruz (R-TX) and other legislators are working to fold it into a sweeping Republican legislative package currently before the Senate, with a vote scheduled for Monday and a crucial July 4th deadline approaching.
Support for the Proposal
Proponents, including prominent figures like Sam Altman of OpenAI, Palmer Luckey from Anduril, and Marc Andreessen of a16z, contend that a fragmented regulatory landscape across different states would hinder American innovation, particularly as competition with China intensifies.
Criticism and Concerns
However, the proposal faces significant opposition. Critics, encompassing most Democrats, numerous Republicans, Dario Amodei, CEO of Anthropic, labor organizations, AI safety advocacy groups, and consumer protection advocates, express concerns that this provision would obstruct states' ability to enact laws safeguarding consumers against potential harms arising from AI technologies.
They also argue it would effectively allow large AI firms to operate with little oversight or accountability.
Governors' Opposition
Seventeen Republican governors voiced their dissent in a letter to Senate Majority Leader John Thune, a proponent of a “light touch” approach to AI regulation, and House Speaker Mike Johnson. They requested the removal of the so-called “AI moratorium” from the budget reconciliation bill, as reported by Axios.
Evolution of the Provision
Initially introduced in May as part of the “Big Beautiful Bill,” the provision aimed to prohibit states from enforcing any laws or regulations concerning AI models, systems, or automated decision-making processes for a decade.
Over the weekend, Senators Cruz and Marsha Blackburn (R-TN) reached an agreement to shorten the pause on state-level AI regulation to five years. The revised language also seeks to exempt laws addressing child sexual abuse material, children’s online safety, and the protection of an individual’s rights related to their name, likeness, voice, and image.
However, the amendment stipulates that such laws must not impose an “undue or disproportionate burden” on AI systems, a clause that legal experts believe could significantly impact the effectiveness of existing state AI laws.
Potential Impact on Existing Laws
This measure could invalidate state AI laws already in effect, such as California’s AB 2013, which requires companies to disclose the data used to train AI systems, and Tennessee’s ELVIS Act, which protects musicians and creators from AI-generated impersonations.
Broader Implications
The moratorium’s scope extends beyond these specific examples. Public Citizen has compiled a database of AI-related legislation that could be affected, revealing considerable overlap among state laws, potentially simplifying compliance for AI companies.
For instance, states including Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have enacted laws criminalizing or establishing civil liability for the dissemination of deceptive AI-generated media intended to influence elections.
Threat to AI Safety Bills
The AI moratorium also jeopardizes several pending AI safety bills, including New York’s RAISE Act, which would require large AI laboratories nationwide to publish comprehensive safety reports.
Funding Leverage
Securing the moratorium’s inclusion in the budget bill required strategic maneuvering. Senator Cruz initially tied compliance with the AI moratorium to states’ eligibility for funds from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program, citing the requirement that budget bill provisions have a direct fiscal impact.
A subsequent revision proposed by Senator Cruz last week limited the requirement to the new $500 million in BEAD funding included in the bill. However, a closer examination of the revised text reveals that it also threatens to withdraw previously allocated broadband funding from non-compliant states.
Senator Maria Cantwell (D-WA) previously criticized Senator Cruz’s language, asserting that it “forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years.”
Future Developments in AI Regulation
Currently, the Senate is conducting a vote-a-rama, involving a rapid succession of votes on amendments to the budget bill. The agreement reached between Senators Cruz and Blackburn will be incorporated into a larger amendment. This amendment is anticipated to pass with a party-line vote from Republicans. Furthermore, senators are expected to vote on a Democratic amendment aimed at removing the entire section altogether, according to sources who have spoken with TechCrunch.
Industry Concerns Regarding Fragmented Regulation
Chris Lehane, serving as the chief global affairs officer at OpenAI, expressed his view on LinkedIn that the present, fragmented approach to AI regulation is ineffective and will likely worsen without a change in course. He emphasized that this situation carries “serious implications” for the United States in its competition with China for leadership in artificial intelligence.
Lehane referenced a statement made by Vladimir Putin, noting that the nation that achieves dominance in AI will shape the future global landscape.
OpenAI’s Position on National Standards
Sam Altman, CEO of OpenAI, echoed these concerns during a recent appearance on the Hard Fork podcast. While acknowledging the potential benefits of adaptive regulation addressing the most significant risks posed by AI, he suggested that a fragmented regulatory environment across individual states would create considerable difficulties in offering services.
Altman also raised questions about the capacity of policymakers to effectively regulate a technology that is evolving at such a rapid pace.
He voiced concerns that a lengthy, detailed regulatory process could quickly become outdated as the technology advances.
A Review of Existing State Legislation
However, a detailed examination of current state laws reveals a different perspective. The majority of existing state AI laws are not overly broad in scope. Instead, they primarily focus on protecting consumers and individuals from specific potential harms.
These harms include issues such as deepfakes, fraudulent activities, discriminatory practices, and violations of privacy. The laws target the application of AI in areas like employment, housing, credit scoring, healthcare provision, and electoral processes. They often include requirements for transparency and safeguards against algorithmic bias.
Inquiries to Leading Tech Companies
TechCrunch has reached out to OpenAI, requesting specific examples of current state laws that have impeded its technological progress or the release of new models. It also asked why compliance with differing state regulations would be considered overly complex, given OpenAI’s advancements in technologies capable of automating numerous white-collar jobs.
TechCrunch posed similar questions to Meta, Google, Amazon, and Apple, but has not yet received responses.
Arguments Against Federal Preemption of State AI Laws
Emily Peterson-Cassin, corporate power director at Demand Progress, articulated to TechCrunch that the claim of regulatory complexity is a long-standing argument against consumer protection. However, she emphasized that large corporations routinely manage compliance with varying state regulations.
Critics contend that the push for an AI moratorium isn't driven by a desire to foster innovation, but rather to circumvent regulatory scrutiny. Despite numerous states enacting AI-related legislation, Congress has yet to pass any federal laws governing artificial intelligence.
Nathan Calvin, VP of state affairs at Encode, stated that he would welcome robust federal AI safety legislation that preempted state laws. However, he believes the proposed moratorium eliminates any incentive for AI companies to engage in negotiations.
Dario Amodei, CEO of Anthropic, voiced strong opposition, characterizing a 10-year moratorium as an excessively broad measure. He believes AI is evolving at an unprecedented rate.
Amodei explained in a New York Times opinion piece that significant changes driven by AI could materialize within two years, rendering predictions for a decade from now unreliable. He asserts that without a defined federal strategy, a moratorium would prevent both state-level action and the establishment of a national safety net.
Instead of dictating product release strategies, Amodei advocates for government collaboration with AI developers to establish standards for transparency regarding their practices and model capabilities.
Resistance to the moratorium extends beyond the Democratic party. Several Republicans have expressed concerns, citing the provision's conflict with the traditional Republican emphasis on states’ rights.
Senator Josh Hawley (R-MO) is actively collaborating with Democrats to remove the preemption clause from the bill, citing states’ rights concerns. Marsha Blackburn also criticized the provision, asserting the necessity for states to safeguard their residents and creative sectors from potential AI-related damages.
Representative Marjorie Taylor Greene (R-GA) declared her opposition to the entire budget if the moratorium remains included, demonstrating the depth of Republican resistance.
Public Opinion on Artificial Intelligence Regulation
Several Republican figures, including Senator Cruz and Senate Majority Leader John Thune, have expressed a preference for minimal intervention in the governance of AI. Senator Cruz articulated this sentiment by stating that all U.S. citizens should have an opportunity to influence the development of this technology.
Conversely, recent findings from a Pew Research Center survey indicate a prevailing desire among Americans for increased oversight of AI. The survey revealed that approximately 60% of U.S. adults, and 56% of experts in the field, are more apprehensive about insufficient government regulation than about overregulation.
A significant portion of the population also exhibits a lack of confidence in the government’s ability to effectively regulate AI. Furthermore, skepticism exists regarding the sincerity of industry initiatives focused on responsible AI development.
This article was updated on June 30 to incorporate revisions to the proposed legislation, the latest information on the Senate’s voting schedule, and newly expressed Republican resistance to the AI moratorium.