TikTok Updates Safety Center - New Resources & Challenge Safety

November 17, 2021
TikTok Addresses Concerns Over Dangerous Online Challenges

TikTok has faced increasing criticism regarding the prevalence of perilous viral “challenges” on its platform. These challenges have, in severe instances, resulted in significant injuries or even fatalities – exemplified by the “blackout challenge,” which prompted regulatory intervention in Italy aimed at restricting access for underage users. Recently, the app has also garnered attention for challenges encouraging disruptive behavior towards educators and the destruction of school property.

Downplaying Involvement and Recent Research

With the possibility of further regulation looming, TikTok is now publicizing the findings of its research into viral challenges and hoaxes, along with the steps it is taking to address these issues. Previously, the company frequently attempted to minimize its role in the spread of viral challenges.

For instance, the company refuted claims that the “slap a teacher” dare originated on TikTok. Following a fatality linked to the blackout challenge, TikTok asserted it had found no evidence of such a challenge on its platform, a claim reiterated during a Senate hearing focused on minor safety on social media. However, Senator Blackburn (R-TN) informed TikTok’s representative that her staff had discovered “pass out videos” and other troubling content.

Commissioned Research and Expert Input

Today, TikTok is releasing the results of research it commissioned to investigate harmful challenges and hoaxes. The company initiated a global project several months ago, encompassing a survey of over 10,000 adolescents, parents, and educators across Argentina, Australia, Brazil, Germany, Italy, Indonesia, Mexico, the U.K., the U.S., and Vietnam.

Furthermore, an independent safeguarding agency, Praesidio Safeguarding, was tasked with producing a report outlining the findings and recommendations. A panel of 12 prominent teen safety experts also reviewed the report and contributed their insights. TikTok collaborated with Dr. Richard Graham, a clinical child psychiatrist specializing in adolescent development, and Dr. Gretchen Brion-Meisels, a behavioral scientist focused on risk prevention in adolescence, to provide additional guidance.

Understanding Adolescent Risk-Taking

The report’s data is significant, highlighting how social media can foster harmful content, such as viral challenges, due to its widespread use among young people. Adolescents exhibit a greater propensity for risk-taking, influenced by their stage of psychological development.

As Dr. Graham explained, puberty is a crucial period preparing children for adulthood. It involves “massive brain development,” he stated.

“Current focus is on understanding adolescent behavior, as judgment centers are being refined in preparation for more complex decision-making and thought processes,” he clarified.

Brain Development and Peer Influence

Dr. Graham noted that young people’s brains are evolving in areas like abstract thinking, emotional recognition, and relationship understanding. Concurrently, their curiosity about the world expands, potentially leading to engagement in riskier activities and a desire for peer validation. While some activities, like watching scary movies, are relatively harmless, others may involve challenges that push boundaries.

“They may sometimes overestimate their capabilities, leading to experiences that can be traumatizing, but their underlying goal is growth,” he observed.

Social Approval and Misguided Safety Assessments

Viral challenges often appeal to teens' need for social approval, generating likes and views. However, their methods for assessing a challenge's safety are flawed: they typically rely on watching additional videos of the challenge or asking friends for advice. Parents and teachers, meanwhile, often avoid discussing challenges for fear of stoking further interest in them.

Participation Rates and Perceptions

The study revealed that most teens do not participate in the most dangerous challenges. Globally, 21% of teens had taken part in challenges, with only 2% participating in risky ones. A mere 0.3% took part in challenges they deemed "really dangerous." Most perceived challenge participation as neutral (54%) or positive (34%), with only 11% viewing it negatively, and 64% reported a positive impact on their friendships and relationships.

Hoaxes and the Search for Verification

The research also examined hoax challenges, such as Blue Whale and Momo, which promote the idea of a malicious actor directing children towards harmful activities, including self-harm or suicide. Sixty-one percent of teens search for more information on hoaxes to verify their authenticity, though confusion often persists. Teens suspect those reposting hoaxes do so for likes and views (62%) or believe the hoax is genuine (60%). Forty-six percent of teens exposed to hoaxes sought support or advice, indicating a need for resources to help them understand such material.

TikTok’s Response and Transparency

Despite the research highlighting the need for continued improvements in user safety, TikTok’s initial response appears limited.

The company announced the addition of a new section to its Safety Center dedicated to challenges and hoaxes, and is revising warning labels for hoaxes related to suicide or self-harm, based on the researchers’ recommendations.

Given the modest nature of these changes, it’s noteworthy that TikTok livestreamed a presentation about its research to the media for an hour prior to today’s announcement, and distributed the full 38-page report to reporters. This suggests an effort to distinguish itself from Facebook, where internal research revealing problems was concealed until whistleblower Frances Haugen disclosed thousands of documents.

A Moderation Challenge

TikTok aims to be seen as actively involved in the research process, but the implemented changes do not fully address the underlying problem. Ultimately, harmful content represents a moderation challenge inherent in the nature of social media itself. Eliminating such content would necessitate a system that doesn’t prioritize shocking or outrageous content for likes and views – a design not typical of most social media platforms.

Enhanced Content Monitoring

TikTok also stated it will monitor for sudden increases in violating content, including potentially dangerous behavior, associated with hashtags. For example, if a hashtag like #FoodChallenge, typically used for recipes, experienced a surge in policy-violating content, the moderation team would investigate and take action.

In essence, TikTok asserts it will enhance content moderation – a function users assumed was already in place.

Research Report Availability

The complete research report is available below.

Praesidio Report – Exploring Effective Prevention Education Responses to Dangerous Online Challenges
