Anthropic Backs Landmark California AI Bill
On Monday, Anthropic publicly endorsed SB 53, a California legislative proposal spearheaded by state Senator Scott Wiener. The bill seeks to establish unprecedented transparency obligations for developers of the most powerful AI models. Anthropic's endorsement is a significant boost for SB 53, particularly given opposition from major technology industry groups such as the Consumer Technology Association (CTA) and Chamber of Progress.
Addressing AI Governance
Anthropic articulated its position in a blog post, stating that while federal-level regulation of frontier AI safety is preferable, the rapid pace of AI advancements necessitates immediate action. The company emphasized that the central question is not *if* AI governance is needed, but rather whether it will be developed proactively or reactively. SB 53, according to Anthropic, provides a viable path toward proactive governance.
Key Provisions of SB 53
Should it become law, SB 53 would mandate that developers of leading AI models – including OpenAI, Anthropic, Google, and xAI – formulate comprehensive safety frameworks. Furthermore, these developers would be required to release public reports detailing the safety and security measures implemented before deploying their advanced AI systems. The bill also incorporates provisions for whistleblower protection, safeguarding employees who raise safety concerns.
Defining Catastrophic Risk
Senator Wiener’s bill specifically targets the prevention of “catastrophic risks,” defined as events resulting in the deaths of at least 50 people or damages exceeding one billion dollars. SB 53 focuses on mitigating extreme AI risks, such as the potential for AI to aid in the creation of biological weapons or to facilitate large-scale cyberattacks. It deliberately avoids addressing more immediate concerns like AI-generated deepfakes or biased outputs.
Legislative Progress and Gubernatorial Response
A previous iteration of SB 53 was approved by the California Senate, but a final vote is still required before it can be sent to the governor. Governor Gavin Newsom has yet to publicly comment on the bill, despite having vetoed a prior AI safety bill, SB 1047, proposed by Senator Wiener.
Industry Opposition and Federal Concerns
Efforts to regulate frontier AI developers have encountered substantial resistance from both Silicon Valley and the Trump administration. Opponents argue that such regulation could hinder American innovation as the United States competes with China. Investors including Andreessen Horowitz and Y Combinator actively opposed SB 1047, and the Trump administration has threatened to preempt state-level AI regulation.
The Commerce Clause Debate
A common argument against state AI safety bills is that they should be addressed at the federal level. Matt Perault and Jai Ramaswamy of Andreessen Horowitz recently argued that many current state AI bills may violate the Constitution’s Commerce Clause, which restricts states from enacting laws that extend beyond their borders and impede interstate commerce.
Anthropic's Perspective on Timely Action
However, Jack Clark, co-founder of Anthropic, contends that the technology industry is poised to develop powerful AI systems in the near future and cannot afford to wait for federal action. He believes SB 53 provides a valuable framework for AI governance that should not be disregarded.
OpenAI's Concerns and Counterarguments
Chris Lehane, OpenAI’s chief global affairs officer, sent a letter to Governor Newsom in August, expressing concerns that AI regulation could drive startups away from California. Notably, the letter did not specifically mention SB 53. Miles Brundage, OpenAI’s former head of policy research, criticized Lehane’s letter as containing “misleading garbage” regarding SB 53 and AI policy.
Scope of Regulation
Notably, SB 53 is designed to regulate only the largest AI companies — specifically, those with gross revenues exceeding $500 million.
A More Moderate Approach
Despite the debate, policy experts suggest that SB 53 represents a more measured approach compared to previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, expressed optimism about the bill’s chances of becoming law, praising its “respect for technical reality” and “legislative restraint.”
Influence of Expert Panel
Senator Wiener has stated that SB 53 was significantly shaped by an expert policy panel convened by Governor Newsom. This panel was co-led by Fei-Fei Li, a prominent Stanford researcher and co-founder of World Labs, and tasked with advising California on AI regulation.
Existing Safety Practices and Legal Enforcement
Many AI labs already maintain internal safety policies similar to those required by SB 53. Companies like OpenAI, Google DeepMind, and Anthropic routinely publish safety reports for their models. However, these commitments are currently self-imposed, and compliance can be inconsistent. SB 53 aims to establish these requirements as legally enforceable state law, with potential financial penalties for non-compliance.
Recent Amendments to the Bill
In September, California lawmakers amended SB 53 to remove a provision requiring third-party audits of AI model developers. Tech companies have previously opposed such audits in other policy debates, citing their perceived burden.