A California bill that would regulate AI companion chatbots is close to becoming law

September 11, 2025
California Poised to Regulate AI Companion Chatbots

A significant move towards regulating artificial intelligence is underway in California. Senate Bill 243 (SB 243), designed to govern AI companion chatbots and safeguard minors and vulnerable individuals, has successfully passed both the State Assembly and Senate with bipartisan backing. The bill now awaits consideration by Governor Gavin Newsom.

Potential Law and Implementation Timeline

Governor Newsom has until October 12th to either approve or veto the legislation. Should he sign SB 243 into law, it will become effective on January 1, 2026. This would position California as the first state to mandate safety protocols for AI companions and establish legal accountability for companies whose chatbots fail to adhere to these standards.

Focus on Preventing Harmful Interactions

The core objective of the bill is to prevent companion chatbots – defined as AI systems offering adaptive, human-like interactions and fulfilling users’ social requirements – from engaging in discussions concerning suicidal thoughts, self-harm, or sexually explicit content.

User Awareness and Transparency

The legislation stipulates that platforms must deliver frequent reminders to users – specifically, every three hours for minors – clarifying that they are interacting with an AI chatbot and not a human being. Users will also be prompted to take breaks from conversations. Furthermore, the bill introduces annual reporting and transparency obligations for AI companies, including prominent entities like OpenAI, Character.AI, and Replika, beginning July 1, 2027.

Legal Recourse for Affected Individuals

Individuals who believe they have suffered harm due to violations of the bill’s provisions will be granted the right to pursue legal action against AI companies. Potential remedies include injunctive relief, damages – capped at $1,000 per violation – and reimbursement of attorney’s fees.

Origins of the Bill and Triggering Events

SB 243 was initially introduced in January by state senators Steve Padilla and Josh Becker. Its momentum increased following the tragic suicide of teenager Adam Raine, who had engaged in extensive conversations with OpenAI’s ChatGPT regarding his death and self-harm. The legislation also addresses leaked internal documents suggesting Meta’s chatbots were permitted to participate in “romantic” and “sensual” exchanges with children.

Broader Scrutiny of AI Safeguards

In recent weeks, both U.S. lawmakers and regulatory bodies have intensified their examination of the safeguards implemented by AI platforms to protect minors. The Federal Trade Commission is preparing an investigation into the impact of AI chatbots on children’s mental well-being. Texas Attorney General Ken Paxton has initiated investigations into Meta and Character.AI, alleging deceptive practices related to mental health claims. Separate probes into Meta have also been launched by Senators Josh Hawley (R-MO) and Ed Markey (D-MA).

Senator Padilla’s Perspective

“I think the harm is potentially great, which means we have to move quickly,” Senator Padilla stated to TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

Importance of Data Sharing

Padilla also emphasized the need for AI companies to share data on how often they refer users to crisis intervention services each year. Collecting this data would give a clearer picture of the problem's prevalence, rather than leaving regulators to learn of issues only after harm has occurred.

Amendments and Refinements to the Bill

SB 243 initially contained more stringent requirements, but several were scaled back through amendments. For instance, the original bill would have prohibited AI chatbots from employing “variable reward” tactics or other features designed to encourage prolonged engagement. These tactics, used by companies like Replika and Character.AI, offer users incentives such as special messages or unlockable content, potentially creating an addictive cycle.

Removed Provisions

The current version of the bill also eliminates provisions that would have mandated operators to track and report instances where chatbots initiated discussions about suicidal ideation or actions with users.

Senator Becker’s Rationale

“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Senator Becker explained to TechCrunch.

Political Context and Lobbying Efforts

The advancement of SB 243 coincides with substantial investments by Silicon Valley companies into pro-AI political action committees (PACs) aimed at supporting candidates in the upcoming elections who advocate for a less restrictive approach to AI regulation.

Concurrent Legislation: SB 53

California is also considering another AI safety bill, SB 53, which would require comprehensive transparency reporting. OpenAI has issued an open letter to Governor Newsom, urging him to reject SB 53 in favor of less demanding federal and international standards. Major tech companies, including Meta, Google, and Amazon, have also voiced opposition to SB 53, while Anthropic is the only company to publicly express support.

Padilla’s View on Innovation and Regulation

“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla asserted. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits — and there are benefits to this technology, clearly — and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

Company Responses

A spokesperson for Character.AI informed TechCrunch that the startup already incorporates prominent disclaimers throughout the user experience, emphasizing that interactions should be treated as fictional. A Meta spokesperson declined to provide a comment.

Ongoing Outreach

TechCrunch has contacted OpenAI, Anthropic, and Replika for statements.
