Meta Enhances AI Chatbot Safety Measures for Teen Users
Following concerns raised by an investigative report, Meta has announced adjustments to its AI chatbot training protocols. The primary focus of these changes is to bolster the safety of teenage users, as confirmed by a company spokesperson in an exclusive statement to TechCrunch.
New Training Protocols
The company is now implementing training that will prevent its chatbots from discussing sensitive topics with users under the age of 18. Specifically, conversations relating to self-harm, suicide, disordered eating, and potentially unsuitable romantic interactions will be avoided.
Meta characterizes these adjustments as preliminary steps. More comprehensive and enduring safety enhancements for younger users are slated for release in the coming months.
Acknowledgement of Previous Shortcomings
Stephanie Otway, a Meta representative, conceded that the company’s chatbots were previously capable of engaging in discussions on these topics, based on what Meta had initially considered acceptable parameters. The company now acknowledges this approach was an oversight.
“As our user base expands and technology advances, we are continuously gaining insights into how young individuals interact with these tools,” Otway stated. “Consequently, we are strengthening our protective measures. This includes training our AIs to refrain from engaging with teens on these sensitive subjects, instead directing them to appropriate expert resources.”
Restricted Access to AI Characters
In addition to the training updates, Meta will be limiting teen access to certain AI characters. Some user-created characters available on Instagram and Facebook have been identified as potentially inappropriate, including those with sexualized themes like “Step Mom” and “Russian Girl.”
Teen users will now only be able to interact with AI characters designed to promote education and creativity, according to Otway.
Response to Recent Investigation
These policy changes arrive shortly after a Reuters investigation revealed an internal Meta document that appeared to permit chatbots to engage in sexual conversations with underage users. One example cited in the document included the chatbot response, "Your youthful form is a work of art," along with other complimentary language.
The document also contained examples of how the AI should respond to requests for violent or sexual imagery involving public figures.
Meta maintains that the document was inconsistent with its overall policies and has since been revised. However, the report has triggered significant debate regarding potential risks to child safety.
Official Scrutiny
Following the report’s publication, Senator Josh Hawley (R-MO) initiated an official investigation into Meta’s AI policies. Furthermore, a coalition of 44 state attorneys general sent a letter to several AI companies, including Meta, stressing the importance of protecting children.
The letter expressed strong disapproval of the apparent disregard for children’s well-being and highlighted concerns that AI assistants were engaging in conduct potentially violating criminal laws.
Data and Future Outlook
Otway declined to disclose the number of Meta’s AI chatbot users who are minors. She also refrained from commenting on whether the company anticipates a decrease in its AI user base as a result of these changes.
Update
This article has been updated to reflect that the implemented changes are interim measures, with Meta planning further updates to its AI safety policies in the future (updated 10:35AM PT).