Scott Wiener on his fight to make Big Tech disclose AI's dangers

Addressing AI Risks: California's New Legislative Efforts
This isn't California State Senator Scott Wiener's first attempt to tackle the risks posed by artificial intelligence.
In 2024, Silicon Valley mounted a fierce campaign against his previous AI safety bill, SB 1047, which would have held technology companies liable for harms caused by their AI systems. Industry leaders warned that the legislation would stifle the growth of the AI sector in the United States. Governor Gavin Newsom ultimately vetoed the bill, citing similar concerns, and a prominent AI collective threw an "SB 1047 Veto Party." One attendee remarked, "Thankfully, AI remains legal."
A Revised Approach with SB 53
Senator Wiener has now introduced a new AI safety bill, SB 53, which Governor Newsom must sign or veto in the coming weeks. This version has drawn more support, or at least is not facing the same level of resistance from Silicon Valley.
Anthropic publicly endorsed SB 53 this month. Meta spokesperson Jim Cullinan told TechCrunch that the company supports AI regulation that balances safety guardrails with continued innovation, saying that "SB 53 represents a step in that direction," while noting areas for potential refinement.
Dean Ball, a former White House AI policy advisor, told TechCrunch that SB 53 is a "win for pragmatic perspectives" and said he sees a strong likelihood that Governor Newsom will sign it.
Key Provisions of SB 53
If enacted, SB 53 would establish some of the first safety reporting obligations for major AI companies, including OpenAI, Anthropic, xAI, and Google. These companies currently face no legal requirement to disclose how they test their AI systems. Many AI labs voluntarily publish safety reports detailing how their models could be misused, such as to help create bioweapons, but these disclosures are discretionary and often inconsistent.
The bill mandates that leading AI labs – those generating over $500 million in revenue – publish safety reports for their most advanced AI models. Similar to SB 1047, the bill concentrates on the most severe potential AI risks: contributions to fatalities, cyberattacks, and the development of chemical weapons. Governor Newsom is also considering other bills addressing different types of AI risks, including engagement-optimization techniques used in AI companions.
SB 53 also creates secure reporting channels for employees within AI labs to communicate safety concerns to government authorities. Furthermore, it establishes a state-funded cloud computing cluster, CalCompute, to provide AI research resources independent of large technology corporations.
Why the Shift in Reception?
A key reason for the increased acceptance of SB 53 compared to SB 1047 is its less stringent nature. SB 1047 proposed holding AI companies liable for any harm caused by their models, while SB 53 primarily focuses on self-reporting and transparency. Additionally, SB 53 specifically targets the largest technology companies, rather than startups.
However, some in the tech industry maintain that AI regulation should be left to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should be subject only to federal standards, a notable position to press directly on a state governor.
Venture firm Andreessen Horowitz published a blog post suggesting that certain California bills might infringe upon the Constitution’s dormant Commerce Clause, which prevents states from unduly restricting interstate commerce.
Senator Wiener's Perspective
Senator Wiener counters that he has little confidence in the federal government's ability to enact meaningful AI safety regulation, which makes state-level action necessary. He believes the Trump administration has been unduly influenced by the tech industry and views recent federal attempts to preempt state AI laws as President Trump "rewarding his funders."
The Trump administration has demonstrably shifted away from the Biden administration’s emphasis on AI safety, prioritizing growth instead. Vice President J.D. Vance, speaking at an AI conference in Paris, stated, “I’m not here this morning to discuss AI safety, which was the conference’s theme a few years ago. I’m here to talk about AI opportunity.”
Silicon Valley has welcomed this change, as evidenced by Trump’s AI Action Plan, which removed obstacles to building the infrastructure required for training and deploying AI models. Today, prominent tech CEOs are frequently seen meeting with President Trump at the White House or jointly announcing substantial data center investments.
Senator Wiener emphasizes the importance of California taking a leadership role in AI safety without stifling innovation.
Interview Excerpts with Senator Wiener
I recently spoke with Senator Wiener to discuss his experiences negotiating with Silicon Valley and his dedication to AI safety legislation. The following conversation has been lightly edited for clarity and conciseness.
Senator Wiener, during our previous conversation regarding SB 1047, you were awaiting Governor Newsom’s decision. Could you reflect on your journey to regulate AI safety over the past few years?
It’s been a challenging yet incredibly rewarding experience, and a significant learning opportunity. We’ve successfully raised awareness about the issue of AI safety, not only in California but also nationally and internationally.
We are dealing with a remarkably powerful new technology that is reshaping the world. The crucial questions are: how do we ensure it benefits humanity while minimizing risks? How do we foster innovation while remaining vigilant about public health and safety? These are vital – and potentially existential – conversations about the future. Both SB 1047 and now SB 53 have contributed to this dialogue about safe innovation.
Looking back at the last two decades of technological advancements, what have you learned about the importance of laws that hold Silicon Valley accountable?
I represent San Francisco, the epicenter of AI innovation. My district is immediately north of Silicon Valley itself, placing us right in the heart of it all. However, we’ve also witnessed how large tech companies – some of the wealthiest in history – have consistently obstructed federal regulation.
Seeing tech CEOs dining at the White House with an aspiring authoritarian is concerning. These are brilliant individuals who have amassed immense wealth, and many of my constituents work for them. It’s disheartening to witness the deals being made with Saudi Arabia and the United Arab Emirates, and the subsequent funding of Trump’s meme coin. These developments cause me significant concern.
I’m not anti-tech; I support tech innovation. It’s incredibly important. But this is an industry that shouldn’t be trusted to self-regulate or rely on voluntary commitments. This isn’t a criticism of individuals, but rather a recognition of the nature of capitalism, which can generate prosperity but also cause harm without sensible regulations to protect the public interest. In the context of AI safety, we’re striving to find that balance.
SB 53 focuses on the most catastrophic potential harms of AI – death, large-scale cyberattacks, and bioweapon creation. Why this specific focus?
The risks associated with AI are diverse, including algorithmic discrimination, job displacement, deepfakes, and scams. Various bills in California and elsewhere address these risks. SB 53 was not intended to be comprehensive and cover all AI-related risks. We are concentrating on a specific category of risk: catastrophic potential.
This focus emerged organically from discussions with individuals in the AI community in San Francisco – startup founders, frontline AI technologists, and those directly building these models. They approached me with the concern that this issue needed thoughtful attention.
Do you believe AI systems are inherently unsafe, or do they simply have the potential to cause death and massive cyberattacks?
I don’t believe they are inherently safe. I recognize that many people working in these labs are deeply committed to mitigating risks. Again, it’s not about eliminating risk entirely. Life inherently involves risk; unless you choose to live in isolation, you will encounter it. Even in the safest environment, unforeseen events can occur.
Is there a risk that some AI models could be used to inflict significant harm on society? Yes, and we know there are individuals who would exploit that potential. We should strive to make it more difficult for malicious actors to cause severe harm, and those developing these models should do the same.
Anthropic has expressed its support for SB 53. What are your interactions like with other industry stakeholders?
We’ve engaged with a wide range of parties: large corporations, small startups, investors, and academics. Anthropic has been constructive in its approach. While they didn’t formally endorse SB 1047 last year, they did express positive sentiments about certain aspects of the bill. I believe they concluded that, on balance, SB 53 was a worthwhile endeavor.
I’ve had conversations with large AI labs that are not actively supporting the bill, but they are not engaged in the same level of opposition as they were with SB 1047. This isn’t surprising, as SB 1047 was more focused on liability, while SB 53 emphasizes transparency. Startups have been less involved this year because the bill primarily targets the largest companies.
Do you feel pressure from the large AI political action committees (PACs) that have formed recently?
This is a consequence of the Citizens United ruling. The wealthiest companies can pour unlimited resources into these PACs to intimidate elected officials. While they have the legal right to do so, it doesn’t influence my policy decisions. I’ve faced attempts to undermine me throughout my career. Various groups have spent millions trying to discredit me, yet here I am. I’m committed to serving my constituents and working to improve my community, San Francisco, and the world.
What is your message to Governor Newsom as he considers whether to sign or veto this bill?
My message is that we listened to your concerns. You vetoed SB 1047 and provided a detailed and thoughtful explanation. You wisely established a working group that produced a strong report, and we carefully considered that report in crafting this bill. The governor outlined a path forward, and we followed it in an attempt to reach an agreement. I hope we have succeeded.