California Becomes First State to Regulate AI Companion Chatbots

California Leads the Nation in AI Chatbot Regulation
California Governor Gavin Newsom signed a groundbreaking bill into law on Monday, establishing regulations for AI companion chatbots. The move makes California the first state in the US to mandate safety protocols for operators of these AI systems.
Protecting Users from Potential Harms
SB 243, the newly enacted legislation, aims to safeguard children and vulnerable individuals from the potential dangers associated with utilizing AI companion chatbots. The law introduces legal accountability for companies – encompassing major players such as Meta and OpenAI, as well as specialized startups like Character AI and Replika – should their chatbots fail to adhere to the stipulated standards.
The introduction of SB 243 in January by Senators Steve Padilla and Josh Becker was significantly influenced by the tragic death of teenager Adam Raine. His suicide followed extensive conversations with OpenAI’s ChatGPT, where he expressed suicidal ideation. Furthermore, leaked internal documents revealed that Meta’s chatbots were permitted to engage in interactions described as “romantic” and “sensual” with minors.
Recently, a family in Colorado initiated legal action against Character AI after their 13-year-old daughter died by suicide following a series of troubling and sexually suggestive exchanges with the company’s chatbots.
Governor Newsom’s Statement
“Technology, including chatbots and social media, has the potential to inspire, educate, and connect people,” Newsom stated. “However, without appropriate safeguards, it can also be used to exploit, deceive, and put our children at risk.”
He continued, “We have witnessed deeply disturbing and tragic instances of young people being harmed by unregulated technology, and we will not remain passive while companies operate without essential limitations and accountability. California can continue to be a leader in AI and technology, but this must be done responsibly, with the safety of our children as the top priority.”
Key Provisions of SB 243
The law takes effect on January 1, 2026, and requires companies to implement features such as age verification and clear warnings regarding social media and companion chatbots.
SB 243 also strengthens penalties for those who profit from illegal deepfakes, with fines of up to $250,000 per violation. Companies must additionally establish protocols for addressing suicide and self-harm and share those protocols, along with related statistics, with the state’s Department of Public Health.
The bill mandates that platforms clearly indicate that interactions are artificially generated. Chatbots are prohibited from presenting themselves as healthcare professionals.
Furthermore, companies must provide break reminders to minors and prevent them from accessing sexually explicit content generated by the chatbots.
Industry Response and Current Safeguards
Several companies have already begun implementing safeguards, particularly focused on protecting children. OpenAI has recently introduced parental controls, content protections, and a self-harm detection system for young ChatGPT users.
Replika, which is designed for users 18 and older, said it allocates “significant resources” to safety through content-filtering systems and directs users to crisis resources, and that it is committed to complying with current and future regulations.
Character AI maintains that its chatbot includes a disclaimer clarifying that all chats are AI-generated and fictional. A spokesperson for Character AI expressed the company’s willingness to collaborate with regulators and lawmakers as they develop legislation, and to comply with laws like SB 243.
Looking Ahead
Senator Padilla described the bill as “a step in the right direction” towards establishing guardrails for “an incredibly powerful technology.”
“It is crucial that we act swiftly to capitalize on opportunities before they pass,” Padilla emphasized. “I hope other states will recognize the risks involved, and I believe many already do. This is a nationwide conversation, and I urge people to take action. The federal government has yet to respond, and we have a responsibility to protect the most vulnerable among us.”
California’s Broader AI Regulation Efforts
SB 243 represents the second significant AI regulation enacted in California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies.
This bill requires major AI labs – including OpenAI, Anthropic, Meta, and Google DeepMind – to be transparent about their safety protocols and provides whistleblower protections for their employees.
Other states, such as Illinois, Nevada, and Utah, have also passed legislation restricting or banning the use of AI chatbots as substitutes for licensed mental health care.
TechCrunch has contacted Meta and OpenAI for comment.
This article has been updated with comments from Senator Padilla, Character AI, and Replika.