October 18, 2025

Silicon Valley Disputes with AI Safety Advocates Escalate

Prominent figures in Silicon Valley, including White House AI and Crypto Czar David Sacks and OpenAI chief strategy officer Jason Kwon, have recently sparked debate with their claims about organizations dedicated to AI safety. Both have suggested that some AI safety advocates are not purely altruistic and may instead be acting in their own interest or at the direction of wealthy backers.

Allegations of Intimidation and Regulatory Capture

AI safety groups have pushed back against these claims, characterizing them as the latest in a series of Silicon Valley attempts to suppress criticism. In 2024, venture capital firms were accused of spreading misinformation about SB 1047, a proposed California bill, by falsely suggesting it could send startup founders to prison.

Although the Brookings Institution debunked these rumors as “misrepresentations,” the bill was ultimately vetoed by Governor Gavin Newsom. The recent actions of Sacks and OpenAI have reportedly caused apprehension among AI safety advocates, many of whom requested anonymity when speaking with TechCrunch to avoid potential repercussions.

Tensions Between Responsible Development and Commercialization

This situation highlights a growing conflict within Silicon Valley: the balance between developing AI responsibly and rapidly deploying it as a large-scale consumer product. This central theme was discussed in detail on the recent Equity podcast featuring Kirsten Korosec, Anthony Ha, and other colleagues.

The podcast also examined a newly enacted AI safety law in California designed to regulate chatbots, as well as OpenAI’s policies concerning explicit content within ChatGPT.

Specific Accusations and Responses

On Tuesday, Sacks publicly accused Anthropic of using fear tactics to promote legislation that would benefit its own position and create obstacles for smaller startups. The accusation followed a speech in which Anthropic co-founder Jack Clark expressed his concerns about the potential risks of AI.

Sacks characterized Anthropic’s approach as a “sophisticated regulatory capture strategy,” while also noting the company’s historically adversarial relationship with the Trump administration.

OpenAI Subpoenas and Transparency Concerns

At the same time, Jason Kwon of OpenAI explained the company’s decision to issue subpoenas to AI safety nonprofits, including Encode. The subpoenas stem from Elon Musk’s lawsuit against OpenAI, which alleges that the company has strayed from its original nonprofit mission.

OpenAI reportedly found it suspicious that several organizations simultaneously voiced opposition to its restructuring. Encode filed a brief supporting Musk’s lawsuit, and other nonprofits publicly criticized the changes. Kwon stated that the subpoenas were issued to investigate funding sources and potential coordination among these groups.

Internal Discord at OpenAI

NBC News reported that OpenAI’s subpoenas targeted Encode and six other nonprofits, requesting communications related to Musk and Meta CEO Mark Zuckerberg, as well as to the groups’ support for SB 53. Sources indicate a potential divide within OpenAI: its research division frequently publishes reports on AI risks, while its policy team actively lobbied against SB 53, favoring federal regulation instead.

Joshua Achiam, OpenAI’s head of mission alignment, publicly expressed his discomfort with the subpoenas, stating, “this doesn’t seem great.”

Broader Perspectives on the Controversy

Brendan Steinhauser, CEO of the Alliance for Secure AI, suggested that OpenAI believes its critics are part of a conspiracy orchestrated by Musk, a view he disputes, noting that much of the AI safety community is in fact quite critical of xAI’s safety practices.

Steinhauser believes OpenAI’s actions are intended to silence critics and deter other nonprofits from speaking out, and that Sacks, for his part, is worried the AI safety movement is gaining influence and pushing for greater accountability from tech companies.

Calls for Real-World Engagement and Public Opinion

Sriram Krishnan, a senior policy advisor for AI at the White House, urged AI safety organizations to engage with individuals directly impacted by AI, including those using, selling, and adopting the technology.

Recent studies reveal that approximately half of Americans express more concern than excitement about AI, though the specific anxieties remain unclear. Further research indicates that voters are more worried about job displacement and deepfakes than catastrophic AI risks, which are a primary focus of the AI safety movement.

The Future of AI Regulation

Addressing safety concerns could slow the AI industry’s rapid growth, a prospect that worries many in Silicon Valley given AI investment’s significant contribution to the American economy. Nevertheless, the AI safety movement is gaining momentum as 2026 approaches.

Silicon Valley’s efforts to counter these safety-focused groups could be an indication that the movement is achieving its goals.

Tags: AI safety, artificial intelligence, Silicon Valley, AI risks, AI ethics