Instagram to show PG-13 content by default to teens, adds more parental controls

Instagram Enhances Teen Safety with New Content Restrictions
To better safeguard younger users, Instagram is implementing new limitations on accounts belonging to individuals under the age of 18. By default, these accounts will now be shown content suitable for a PG-13 audience, effectively filtering out potentially harmful material.
This means content featuring extreme violence, explicit sexual depictions, or graphic depictions of drug use will be restricted from the default feeds of teenage users.
Parental Approval for Content Setting Changes
Teen users will be unable to modify these default content settings independently. Explicit consent from a parent or legal guardian will be required before any changes can be made.
Introducing "Limited Content" Filtering
Instagram is also introducing a stricter filter, called "Limited Content," which will prevent teens from seeing posts it covers and from leaving comments on them.
Starting next year, the "Limited Content" filter will also restrict the types of conversations teens can have with AI chatbots. In the meantime, the PG-13 content settings already apply to AI-driven conversations.
Responding to Concerns About AI Chatbots
These changes are occurring amidst legal challenges faced by chatbot developers, including OpenAI and Character.AI, regarding potential harm to users. OpenAI recently implemented new restrictions for ChatGPT users under 18, focusing on preventing “flirtatious talk.”
Character.AI also introduced new limitations and parental controls earlier this year.
Expanded Controls Across the Platform
Instagram has been actively developing teen safety tools across various platform features, including accounts, direct messages, search, and content discovery. These efforts are now being expanded with additional controls and restrictions specifically for underage users.
Teenagers will be prevented from following accounts that share age-inappropriate content. If a teen already follows such an account, they will no longer be able to view its posts or interact with it, and the account will likewise be unable to interact with the teen.
Furthermore, the platform is removing such accounts from recommendation algorithms, making them more difficult to locate.
Blocking Inappropriate Content in Direct Messages
The company is also implementing measures to block teenagers from viewing inappropriate content that is shared with them through direct messages.
Strengthening Restrictions on Harmful Keywords
Meta already restricts the discovery of content related to eating disorders and self-harm for teen accounts. Now, the platform is expanding these restrictions to block terms like “alcohol” and “gore.”
Efforts are also underway to prevent teens from finding content related to these categories through misspellings.
New Parental Flagging System
A new system is being tested that will allow parents to flag content they believe is unsuitable for teens, utilizing existing supervision tools. Flagged content will then be reviewed by a dedicated team.
Global Rollout
These changes are initially being rolled out in the United States, United Kingdom, Australia, and Canada, with a planned global expansion in the coming year.