Instagram Enhances Safety Measures for Accounts Featuring Children
Meta has announced the implementation of enhanced security protocols for Instagram accounts managed by adults that predominantly showcase children. This update, detailed in a company blog post released on Wednesday, aims to bolster protection against potential abuse.
Stricter Message Settings and Content Filtering
Accounts meeting these criteria will be automatically transitioned to the application’s most restrictive messaging configurations. This adjustment is designed to curtail unwanted direct messages. Furthermore, the platform’s “Hidden Words” feature will be activated by default, filtering out potentially offensive commentary.
These changes will affect accounts operated by adults who frequently post images and videos of their children, as well as those managed by parents or representatives acting on behalf of child performers.
Addressing Potential Exploitation
Meta acknowledges that while the vast majority of these accounts are utilized responsibly, a minority may attempt to exploit them. This includes posting inappropriate or sexualized comments, or soliciting explicit content via direct messages – actions that directly contravene the platform’s established guidelines.
The company stated its intention to proactively hinder interactions between potentially harmful adults, such as those previously blocked by teenagers, and accounts primarily featuring children. Instagram will avoid recommending suspicious adult accounts to these child-focused profiles, and vice versa.
Broader Context of Social Media Safety
This announcement arrives as Meta and Instagram continue to address growing concerns regarding the impact of social media on mental wellbeing. These concerns have been voiced by the U.S. Surgeon General and numerous state governments.
Some states have even considered, or enacted, legislation requiring parental consent for minors to access social media platforms.
Impact on Family Vloggers and “Kidfluencers”
The modifications will notably affect family vloggers and parents who operate accounts for “kidfluencers.” These accounts have faced scrutiny due to the inherent risks associated with publicly sharing children’s lives online.
Investigations, such as one conducted by The New York Times, have revealed instances where parents are cognizant of, or actively participate in, the exploitation of their children, potentially through the sale of images or clothing.
The NYT’s analysis of 5,000 parent-managed accounts uncovered 32 million connections to male followers.
Account Notifications and Privacy Reviews
Accounts subject to the enhanced safety settings will receive a prominent notification within their Instagram Feed. This notification will inform them of the updated security measures and encourage a review of their account privacy settings.
Removal of Inappropriate Accounts
Meta reports having removed approximately 135,000 Instagram accounts that were sexualizing accounts primarily featuring children. Additionally, 500,000 Instagram and Facebook accounts linked to those removed accounts have also been deactivated.
New Safety Features for Teen Accounts
In parallel with these changes, Meta is introducing new safety features specifically designed for Teen Accounts, which offer built-in protections for younger users.
Enhanced Context and Reporting Tools
Teenagers will now have access to safety tips, reminding them to carefully evaluate profiles and consider the content they share. The month and year an account was created will also be displayed at the top of new chat windows.
Instagram has also implemented a combined block and report function, allowing users to simultaneously block and report unwanted accounts.
Empowering Teens to Identify Scams
These new features are intended to provide teens with greater context regarding the accounts they interact with, and to assist them in identifying potential scams, according to Meta.
The company noted that teens are actively utilizing existing safety features. In June alone, teen users blocked accounts 1 million times and filed another 1 million reports after receiving a safety notice.
Nudity Protection Filter Effectiveness
Meta also provided an update on its nudity protection filter, stating that 99% of users, including teenagers, have chosen to keep it enabled. Over 40% of blurred images received in direct messages remained blurred last month.