Meta Suppressed Children’s Safety Research, Whistleblowers Allege

Four people, two current and two former Meta employees, have reportedly provided Congress with documents alleging that the company suppressed research on children's online safety.
The allegations center on policy changes Meta made roughly six weeks after the 2021 disclosures by former employee Frances Haugen, whose leaked internal documents showed the company had found that Instagram could harm the mental health of teenage girls.
Changes to Research Policies
According to the report, Meta changed its policies on research into sensitive topics, including politics, children, gender, race, and online harassment.
The new policies proposed two ways to limit the risks of such research: researchers could involve legal counsel in their work, shielding communications under attorney-client privilege, or they could write up their findings more vaguely, avoiding terms like “non-compliant” or “illegal.”
Specific Allegations of Interference
Jason Sattizahn, a former Meta researcher who specialized in virtual reality, said his supervisor instructed him to delete recordings of an interview in which a teenager recounted how his ten-year-old brother had been subjected to inappropriate advances on Meta's Horizon Worlds platform.
A Meta spokesperson pointed to privacy rules, telling TechCrunch: “Global privacy regulations mandate the deletion of information collected from minors under 13 without verifiable parental or guardian consent.”
Whistleblower Concerns and Meta’s Response
The whistleblowers contend that the documents show a recurring pattern: Meta discouraged employees from discussing or investigating how children under 13 were using its social virtual reality apps.
Meta responded that the allegations are selectively presented to support a false narrative, and said it has approved nearly 180 Reality Labs studies, including research on youth safety and well-being, since the start of 2022.
Lawsuit and Further Allegations
Kelly Stonelake, who spent 15 years at Meta, filed a lawsuit in February raising similar concerns. She told TechCrunch that she oversaw efforts to bring Horizon Worlds to teenagers, international markets, and mobile users.
Stonelake said the app lacked adequate safeguards to keep out users under 13 and suffered from persistent problems with racism.
Her lawsuit alleges that, during testing, users with Black avatars were subjected to racial slurs within an average of 34 seconds of entering the platform.
Separately, Stonelake has filed a lawsuit against Meta alleging sexual harassment and gender discrimination.
Broader Concerns Regarding AI and Child Safety
The criticism extends beyond Meta's VR products to its other offerings, including AI chatbots. Reuters reported last month that Meta's earlier AI guidelines permitted chatbots to engage in “romantic or sensual” conversations with minors.
This raises further questions about Meta’s commitment to protecting children online and the potential risks associated with its various platforms.