
The Race to Regulate AI Has Sparked a Federal vs. State Showdown

November 28, 2025

The Emerging Regulatory Landscape for Artificial Intelligence

Washington is nearing a pivotal moment in determining the framework for regulating artificial intelligence. The central debate, however, isn't focused on the technology itself, but rather on the appropriate regulatory authority.

State-Level Initiatives and Concerns

With a comprehensive federal AI standard prioritizing consumer safety still absent, numerous states have proposed legislation to safeguard residents from potential harms linked to AI. Examples include California’s SB-53, an AI safety bill, and Texas’ Responsible AI Governance Act, which specifically addresses the intentional misuse of AI systems.

Companies within the technology sector, including established giants and emerging startups originating from Silicon Valley, contend that these varied state laws would establish an impractical and fragmented regulatory environment, potentially hindering innovation.

Industry Push for National Standards

Josh Vlasto, co-founder of the pro-AI political action committee Leading the Future, expressed concerns to TechCrunch, stating that these state regulations could impede the United States’ competitive standing against China in the field of AI.

The industry, alongside representatives now working within the White House, is advocating for either a unified national standard or the absence of regulation altogether. This has led to new proposals aimed at preventing states from independently enacting AI legislation.

Federal Efforts to Preempt State Laws

Reportedly, members of the House of Representatives are attempting to utilize the National Defense Authorization Act (NDAA) to invalidate state-level AI laws. Simultaneously, a draft of a White House executive order reveals substantial support for preempting state regulatory efforts concerning AI.

Such broad preemption of states’ rights to regulate AI faces opposition in Congress, having been decisively rejected in a similar vote earlier this year. Arguments against preemption center on the need for consumer protection in the absence of a federal standard, and the potential for unchecked operation by technology companies.

Developing a Federal Standard

Representative Ted Lieu (D-CA) and the bipartisan House AI Task Force are currently developing a series of federal AI bills. These bills aim to encompass a wide range of consumer protections, addressing issues such as fraud, healthcare applications, transparency, child safety, and the mitigation of catastrophic risks.

The creation of a comprehensive federal bill of this scope is anticipated to be a lengthy process, potentially spanning months or even years. This timeline underscores the urgency and contentious nature of the current efforts to curtail state authority in AI policy.

The Emerging Conflict: NDAA versus Executive Order Approaches to AI Regulation

Recent weeks have witnessed an escalation in attempts to curtail state-level regulation of artificial intelligence (AI).

House Action via the NDAA

Language aimed at preempting state AI regulations is currently under consideration for inclusion within the National Defense Authorization Act (NDAA). This development was reported by Punchbowl News, with House Majority Leader Steve Scalise (R-LA) confirming the discussions.

According to Politico, Congress aimed to reach an agreement on the defense bill prior to Thanksgiving. Sources indicate that negotiations center on limiting the scope of preemption, possibly allowing states to retain authority in areas such as child safety and transparency requirements.

White House Executive Order Strategy

A draft of a White House Executive Order (EO), which has since been reportedly paused, outlines a potential federal strategy for preempting state AI laws.

This EO proposes the establishment of an “AI Litigation Task Force” tasked with challenging state AI legislation in the courts. Furthermore, it directs federal agencies to assess state laws considered unduly burdensome.

Federal Standards and Agency Direction

The draft EO also seeks to encourage the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to develop national standards for AI. These standards would effectively supersede any conflicting state-level regulations.

David Sacks' Role and Influence

A significant aspect of the proposed EO would grant David Sacks co-leadership of the effort to create a unified legal framework for AI.

Sacks, previously designated as Trump’s AI and cryptocurrency advisor and a co-founder of the venture capital firm Craft Ventures, would wield considerable influence over AI policy. This would potentially diminish the traditional role of the White House Office of Science and Technology Policy (OSTP) and its director, Michael Kratsios.

Advocacy for Limited Federal Oversight

Sacks has consistently voiced support for preventing state-level regulation of AI. He favors a minimal federal oversight approach, advocating instead for industry self-regulation as a means to “maximize growth.”

The Issue of Fragmented AI Regulation

The stance taken by Sacks aligns with the perspective held by a significant portion of the artificial intelligence sector. Numerous pro-AI political action committees (PACs) have been established recently, pouring hundreds of millions of dollars into local and state elections.

Their primary objective is to oppose candidates who champion the regulation of AI technologies.

Leading the Future, financially supported by Andreessen Horowitz, Greg Brockman (President of OpenAI), Perplexity, and Joe Lonsdale (co-founder of Palantir), has accumulated over $100 million in funding.

This week, Leading the Future initiated a $10 million campaign aimed at persuading Congress to establish a unified national AI policy that takes precedence over individual state laws.

Vlasto explained to TechCrunch that, “In the pursuit of technological innovation within the tech industry, it’s crucial to avoid a scenario where numerous laws continually emerge from individuals lacking specialized technical knowledge.”

He contends that a fragmented regulatory landscape across different states will “hinder our progress in competing with China.”

Nathan Leamer, the executive director of Build American AI, the PAC's advocacy arm, affirmed the group's support for federal preemption, even in the absence of specific federal consumer protections tailored to AI.

Leamer posits that current legal frameworks, such as those addressing fraud or product liability, are adequate for addressing potential harms caused by AI.

Unlike state laws that often aim to proactively prevent issues, Leamer advocates for a more responsive strategy: allowing companies to innovate rapidly and addressing any resulting problems through legal proceedings as they occur.

The Debate Over Federal vs. State AI Regulation

New York Assembly member Alex Bores, currently campaigning for a Congressional seat, has become a primary focus for the organization Leading the Future. His sponsorship of the RAISE Act, mandating safety protocols for substantial AI laboratories to mitigate significant risks, has drawn attention.

Bores expressed his belief in AI's potential to TechCrunch, emphasizing the necessity of sensible regulation. He posits that AI systems gaining market dominance will be those deemed trustworthy, and that the market often fails to adequately prioritize investment in safety measures.

State Agility in Addressing AI Risks

While Bores advocates for a unified national AI strategy, he contends that states are better positioned to respond swiftly to evolving dangers.

Indeed, states have demonstrated a greater capacity for rapid action.

By November 2025, a total of 38 states had enacted over 100 laws pertaining to AI, largely concentrating on issues such as deepfakes, transparency requirements, and the governmental application of AI technologies. (Recent research indicates that 69% of these laws do not impose any obligations on AI developers.)

Congressional Delays Compared to State Action

The comparatively slower pace of legislative progress in Congress further supports the argument for state-level responsiveness. While numerous AI-related bills have been proposed, few have been successfully enacted.

Representative Lieu, for instance, has submitted 67 bills to the House Science Committee since 2015, with only one ultimately becoming law.

Opposition to Federal Preemption

Over 200 legislators have signed an open letter protesting preemption within the National Defense Authorization Act (NDAA). They maintain that states function as “laboratories of democracy” and must preserve their ability to address emerging digital challenges with flexibility.

Furthermore, nearly 40 state attorneys general have jointly issued a letter opposing any prohibition on state-level AI regulation.

Arguments Against the "Patchwork" Complaint

Bruce Schneier, a cybersecurity specialist, and Nathan E. Sanders, a data scientist, authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, suggest that concerns regarding a fragmented regulatory landscape are exaggerated.

They point out that AI companies already adhere to stringent regulations within the European Union, and that most industries are capable of functioning effectively under diverse state laws.

According to Schneier and Sanders, the primary motivation behind the opposition to state regulation is a desire to evade accountability.

Potential Framework for a Federal Standard

Representative Ted Lieu is currently preparing a comprehensive legislative proposal exceeding 200 pages, anticipated for introduction in December. The bill addresses multiple facets of artificial intelligence: provisions on fraud, safeguards against deepfakes, protections for whistleblowers, allocation of computational resources to academic institutions, and mandatory testing and disclosure requirements for prominent large language model companies.

A key element of this proposed legislation would mandate that AI laboratories conduct thorough testing of their models and publicly release the findings. Currently, such practices are largely voluntary within the industry.

Comparison to Existing Senate Proposals

While Lieu’s bill has not yet been formally presented, he clarifies that it does not empower federal agencies to directly assess AI models. This contrasts with a comparable bill put forth by Senators Josh Hawley and Richard Blumenthal.

The senators’ proposal calls for the establishment of a government-operated evaluation program for sophisticated AI systems prior to their deployment.

Strategic Approach to Legislation

Lieu recognizes that his bill adopts a less stringent approach than the Senate version. However, he believes this increases its likelihood of enactment.

“My primary objective is to achieve legislative success during this term,” Lieu stated, acknowledging the openly expressed opposition to AI regulation from House Majority Leader Scalise.

He further explained his pragmatic strategy: “I am not crafting legislation based on ideal scenarios. My focus is on developing a bill that can garner approval from a Republican-led House, Senate, and White House.”

This approach prioritizes feasibility and compromise to navigate the current political landscape and advance AI regulation.

Tags: AI regulation, artificial intelligence, federal government, state government, AI policy