
Character AI First Amendment Claim: Motion to Dismiss Explained

January 24, 2025

Character AI Faces Dismissal Motion in Teen Suicide Case

Character AI, a platform that lets users roleplay with AI chatbots, is contesting a lawsuit brought by the mother of a teenager who died by suicide, allegedly after developing an unhealthy dependence on the company’s technology.

Lawsuit Details and Initial Response

Megan Garcia filed the suit against Character AI in October in the U.S. District Court for the Middle District of Florida, Orlando Division. Garcia asserts that her 14-year-old son, Sewell Setzer III, formed a strong emotional attachment to an AI chatbot named “Dany,” texting it constantly and gradually withdrawing from his real-life relationships.

In response to Setzer’s death, Character AI announced enhanced safety measures, including improved systems for detecting, responding to, and intervening in conversations that violate its terms of service. Garcia, however, is seeking more substantial safeguards, which could curtail the chatbots’ ability to generate narratives and share personal stories.

First Amendment Defense

In a motion to dismiss the case, Character AI’s legal team argues that the platform is entitled to protection under the First Amendment. They draw a parallel to the legal protections afforded to computer code itself.

The filing states, “The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide.” It further contends that the nature of the communication – whether with an AI chatbot or a video game character – does not alter this First Amendment analysis.

User Rights and Section 230

Notably, Character AI’s counsel emphasizes that the motion does not assert the company’s own First Amendment rights. Instead, it argues that a successful lawsuit against the platform would infringe upon the First Amendment rights of its users.

The motion does not address potential protections under Section 230 of the Communications Decency Act. This federal law shields online platforms from liability for content posted by third parties. While some legal scholars suggest Section 230 may not extend to AI-generated content, the issue remains unresolved.

Concerns About Industry Impact

Counsel for Character AI also suggests that the plaintiff’s ultimate goal is to effectively “shut down” the platform and inspire legislation regulating similar technologies. They argue that a favorable outcome for the plaintiffs could create a “chilling effect” on Character AI and the broader generative AI sector.

The filing highlights that the lawsuit seeks “drastic changes” that would significantly restrict the ability of Character AI’s millions of users to engage in conversations with the platform’s characters.

Additional Lawsuits and Investigations

This lawsuit, which also names Alphabet – Character AI’s corporate backer – as a defendant, is one of several facing the company. Other cases allege that Character AI exposed a 9-year-old to inappropriate content and encouraged self-harm in a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced an investigation into Character AI and 14 other tech companies. The investigation focuses on potential violations of state laws concerning online privacy and child safety.

The Growing AI Companion Industry

Character AI operates within a rapidly expanding market of AI companionship apps. The potential mental health implications of these apps are largely unexplored, with some experts voicing concerns about exacerbating loneliness and anxiety.

Recent Developments at Character AI

Founded in 2021 by Noam Shazeer and Daniel De Freitas, both former Google AI researchers, Character AI was reportedly the subject of a $2.7 billion “reverse acquihire” by Google. The company states it is continually working to enhance safety and moderation.

Recent safety enhancements include new tools, a dedicated AI model for teenage users, content blocking, and more visible disclaimers clarifying that the AI characters are not real individuals.

The platform has also undergone leadership changes following the departure of Shazeer and De Freitas to Google. Erin Teague, a former YouTube executive, was appointed chief product officer, and Dominic Perella, previously general counsel, became interim CEO.

Character AI is currently testing games on the web to improve user engagement and retention.


Tags: Character AI, First Amendment, motion to dismiss, chatbot, AI, legal case