California AI Safety Bill: SB 1047 Push Continues

July 9, 2025
California Bill SB 53 Aims for AI Transparency

State Senator Scott Wiener of California introduced revised amendments to Senate Bill 53 on Wednesday. The proposed legislation would require major artificial intelligence (AI) companies to disclose their safety and security protocols and to issue reports whenever safety incidents occur.

First-of-its-Kind Transparency Requirements

If enacted, California would become the first state to impose substantial transparency obligations on prominent AI developers, likely including companies such as OpenAI, Google, Anthropic, and xAI.

Previous Bill and Governor's Response

Senator Wiener’s earlier AI bill, SB 1047, contained comparable stipulations for AI model developers to release safety reports. However, strong opposition from Silicon Valley led to its veto by Governor Gavin Newsom. Following the veto, Governor Newsom established a policy group, including Stanford researcher Fei-Fei Li, to formulate goals for the state’s AI safety initiatives.

Influence of AI Policy Group Recommendations

California’s AI policy group recently released its final recommendations. These emphasized the necessity for “requirements on industry to publish information about their systems” to foster a “robust and transparent evidence environment.” Senator Wiener’s office confirmed that the amendments to SB 53 were significantly shaped by this report.

Senator Wiener's Statement

“The bill continues to be a work in progress,” Senator Wiener stated in a press release. “I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be.”

Balancing Transparency and Growth

SB 53 seeks to strike a balance that SB 1047 reportedly lacked: establishing meaningful transparency requirements for leading AI developers without stifling the growth of California’s AI sector.

Industry Reaction

Nathan Calvin, VP of State Affairs for the AI safety nonprofit Encode, told TechCrunch, “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

Whistleblower Protections

The bill also establishes protections for employees of AI labs who raise concerns about technology posing a “critical risk” to society. This risk is defined as contributing to the death or injury of over 100 individuals, or causing more than $1 billion in damages.

Creation of CalCompute

Furthermore, SB 53 proposes the creation of CalCompute, a public cloud computing cluster designed to support AI development by startups and researchers.

Differences from SB 1047

Unlike its predecessor, SB 1047, Senator Wiener’s revised bill does not hold AI model developers liable for the consequences of their AI models. It is also designed to avoid placing undue burdens on startups and researchers who refine existing AI models or utilize open-source options.

Next Steps for SB 53

With the recent amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection. If approved there, it will face further legislative review before reaching Governor Newsom’s desk.

Similar Legislation in New York

In New York, Governor Kathy Hochul is currently evaluating a comparable AI safety bill, the RAISE Act, which also mandates safety and security reports from large AI developers.

Federal Moratorium Attempt Fails

A proposed 10-year federal moratorium on state AI regulation, intended to prevent a fragmented regulatory landscape, recently failed in the Senate with a vote of 99-1.

Call for State Leadership

Geoff Ralston, former president of Y Combinator, stated, “Ensuring AI is developed safely should not be controversial — it should be foundational.” He further urged Congress to take the lead, but acknowledged the need for states to act in the absence of federal action, citing California’s SB 53 as a “thoughtful, well-structured example of state leadership.”

Industry Resistance to Transparency

While Anthropic has expressed general support for increased transparency, companies like OpenAI, Google, and Meta have been more resistant to such requirements.

Inconsistent Safety Report Publication

While leading AI developers typically publish safety reports, their record has been inconsistent. Google, for instance, delayed publishing a safety report for its Gemini 2.5 Pro model, and OpenAI declined to publish one for its GPT-4.1 model, which a subsequent third-party study suggested may have alignment issues.

A More Moderate Approach

SB 53 represents a more moderate approach compared to previous AI safety bills. However, it still has the potential to compel AI companies to disclose more information than they currently do. Senator Wiener is once again poised to test the boundaries of AI regulation.

Tags: AI safety, SB 1047, California, AI regulation, artificial intelligence, safety reports