Google Gemini Safety Concerns: Risks for Kids & Teens

Common Sense Media Assesses Google’s Gemini AI
On Friday, Common Sense Media, a nonprofit dedicated to children’s online safety, published its evaluation of the risks associated with Google’s Gemini AI products. The organization determined that while the AI clearly identifies itself as a computer program to young users – a crucial factor in preventing potential delusional thinking in vulnerable individuals – improvements are still necessary in several areas.
Tiered Systems and Underlying Architecture
A key finding was that both the “Under 13” and “Teen Experience” versions of Gemini appear to be based on the adult version of the AI. Additional safety features are layered on top, rather than being built into the core design. Common Sense Media argues that truly safe AI for children requires a foundation specifically designed with child safety as a primary consideration.
Exposure to Inappropriate Content
The analysis revealed that Gemini could still potentially expose children to content deemed “inappropriate and unsafe.” This includes information concerning sensitive topics like sex, drugs, alcohol, and potentially harmful mental health advice, which children may not be equipped to process.
Concerns Regarding Mental Health
This potential for exposure is particularly worrying given recent events. AI interactions have been implicated in several teenage suicides. OpenAI is currently facing a wrongful death lawsuit following the suicide of a 16-year-old who allegedly used ChatGPT to plan his actions, successfully circumventing the chatbot’s safety protocols. Similarly, Character.AI was previously sued in connection with a teen user’s death.
Apple’s Potential Integration of Gemini
The assessment coincides with reports suggesting Apple is evaluating Gemini as the large language model (LLM) to power its upcoming AI-enhanced Siri, expected next year. This integration could broaden the exposure of young users to these risks, unless Apple implements effective safety measures.
Lack of Age-Specific Guidance
Common Sense Media also noted that Gemini's offerings for children and teenagers do not adequately differentiate guidance and information based on age. Consequently, both tiers received a "High Risk" rating, despite the implemented safety filters.
Expert Commentary
“Gemini gets some basics right, but it stumbles on the details,” stated Robbie Torney, Senior Director of AI Programs at Common Sense Media. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development.” Torney emphasized the necessity of designing AI with the specific needs and developmental stages of children in mind, rather than simply modifying adult-oriented products.
Google’s Response
Google responded to the assessment, acknowledging ongoing improvements to its safety features. The company maintains that it has specific policies and safeguards in place for users under 18 to prevent harmful outputs.
Red Teaming and Safeguard Implementation
Google also stated that it employs “red-teaming” exercises and consults with external experts to enhance its protective measures. However, the company conceded that some Gemini responses were not functioning as intended and that additional safeguards have been implemented to address these issues.
Discrepancies in Reporting
Google noted that, as Common Sense Media itself acknowledged, safeguards are in place to prevent the AI from simulating genuine relationships. The company also suggested that the report may have referenced features unavailable to users under 18, but said it did not have access to the specific questions used in Common Sense Media's testing to confirm this.
Comparative Risk Assessments
Common Sense Media has previously evaluated other AI services, including Meta AI, Character.AI, Perplexity, ChatGPT, and Claude. Meta AI and Character.AI were deemed "unacceptable" due to severe risk levels. Perplexity was categorized as high risk, ChatGPT as moderate risk, and Claude (intended for users 18 and older) was considered minimal risk.