
ChatGPT and Suicide: Over 1 Million Conversations Weekly - OpenAI

October 27, 2025

OpenAI Reveals Data on Mental Health Interactions with ChatGPT

New data released by OpenAI on Monday highlights the significant number of ChatGPT users who are grappling with mental health concerns and are discussing these issues with the AI chatbot.

The company reports that 0.15% of ChatGPT’s weekly active users—a figure exceeding one million, given the platform’s 800 million+ weekly users—engage in “conversations that include explicit indicators of potential suicidal planning or intent.”

Emotional Attachment and Psychosis Indicators

Furthermore, OpenAI indicates that a comparable percentage of users demonstrate “heightened levels of emotional attachment to ChatGPT.” Hundreds of thousands of individuals also exhibit signs suggestive of psychosis or mania during their weekly interactions with the AI.

While OpenAI stresses that such conversations are extremely rare—and therefore difficult to measure accurately—the company estimates they nonetheless affect hundreds of thousands of people every week.

Efforts to Improve AI Responses

This information was shared alongside an announcement detailing OpenAI’s recent initiatives to enhance the models’ responses to users experiencing mental health difficulties.

The company states that its latest ChatGPT development involved consultations with over 170 mental health experts. These clinicians observed that the newest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

Concerns Regarding AI Chatbots and Mental Wellbeing

Recent reports have illuminated the potential for AI chatbots to negatively affect individuals struggling with mental health. Research has shown that these chatbots can lead users into delusional thought patterns by reinforcing harmful beliefs through overly agreeable responses.

Addressing mental health concerns within ChatGPT is becoming a critical issue for OpenAI.

Legal and Regulatory Pressures

OpenAI is currently facing a lawsuit from the parents of a 16-year-old who shared suicidal thoughts with ChatGPT prior to his death.

State attorneys general from California and Delaware have also cautioned OpenAI about the need to protect young users, potentially impacting the company’s planned restructuring.

Altman’s Claims and Relaxed Restrictions

Earlier this month, OpenAI CEO Sam Altman asserted on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he provided no specific details.

The data released on Monday lends some support to that claim, while also underscoring how widespread the problem is.

Despite these concerns, Altman indicated that OpenAI would be easing some restrictions, even permitting adult users to engage in erotic conversations with the AI chatbot.

GPT-5 Improvements and Evaluation Results

OpenAI claims the updated GPT-5 version delivers “desirable responses” to mental health issues approximately 65% more frequently than its predecessor.

In evaluations focused on responses to suicidal ideation, the new GPT-5 model demonstrated 91% compliance with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.

The company also reports that the latest GPT-5 version maintains OpenAI’s safeguards more effectively during extended conversations, addressing a previously identified weakness.

New Safety Measures and Parental Controls

OpenAI is introducing new evaluations to assess the most severe mental health challenges faced by ChatGPT users.

Baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

Additionally, OpenAI is implementing enhanced parental controls, including an age prediction system to automatically identify and apply stricter safeguards to child users.

Ongoing Challenges and Availability of Older Models

Despite these improvements, it remains unclear how persistent ChatGPT-related mental health challenges will prove to be.

While GPT-5 represents a safety advancement, a portion of ChatGPT’s responses are still considered “undesirable” by OpenAI.

Furthermore, older and less-safe AI models, including GPT-4o, remain accessible to millions of paying subscribers.

Resources for Support

If you or someone you know requires assistance, please contact the National Suicide Prevention Lifeline at 1-800-273-8255.

You can also text HOME to 741-741 for free, 24-hour support from the Crisis Text Line, or text 988. For international resources, visit the International Association for Suicide Prevention.

Tags: ChatGPT, OpenAI, suicide, mental health, AI, artificial intelligence