Verified Moderation: The Future of Online Safety

March 23, 2021
The Evolving Landscape of Online Identity and Accountability

From the internet’s inception, determining the identity of individuals online has presented both significant challenges and compelling intrigue. In the early phases of social media and online forums, the prevalence of anonymous usernames allowed users to adopt any persona they desired.

While this anonymity offered a sense of freedom, its drawbacks quickly surfaced. Individuals with malicious intent exploited this anonymity to victimize others, engage in harassment, and disseminate false information without facing repercussions.

The Pillars of Content Moderation

For many years, discussions surrounding content moderation have centered on two primary components. The first concerns the establishment of rules – defining acceptable and prohibited content, and determining the criteria for ambiguous cases. The second focuses on enforcement – utilizing both human moderators and artificial intelligence to identify and flag inappropriate or illegal material.

Although these elements remain crucial to any moderation strategy, they primarily address issues after they occur. A further, equally vital tool often receives insufficient attention: verification.

Verification: Beyond the Blue Checkmark

Many associate verification solely with the “blue checkmark,” a symbol of status often granted to prominent figures and celebrities. However, verification is increasingly recognized as a powerful tool in moderation efforts, particularly in combating harassment and hate speech.

A verified badge signifies more than importance: it confirms a user’s stated identity, providing a means to hold individuals accountable for their online actions.

The Proliferation of Fake Accounts

Social media platforms currently grapple with a surge in fake accounts, exemplified by recent impersonation incidents on platforms such as Clubhouse. Bots and fabricated profiles spread misinformation faster than moderators can remove them.

In response, Instagram has begun implementing enhanced verification procedures to address this issue. By confirming users’ genuine identities, Instagram aims to better detect and hold accountable accounts attempting to deceive their followers, thereby enhancing community safety.

Legal and Ethical Imperatives for Verification

The need for verification extends beyond curbing the spread of problematic content; it also assists organizations in maintaining legal compliance.

Following revelations of illegal content on Pornhub, the platform prohibited uploads from unverified users and removed all content originating from unverified sources – representing over 80% of its hosted videos. Subsequently, it implemented new verification measures to prevent similar issues from arising.

This situation serves as a cautionary example for all companies. Had verification been implemented from the outset, the platform would have been far better positioned to identify and exclude malicious actors before harm occurred.

A Multifaceted Approach to Verification

It’s important to understand that verification isn’t a singular solution, but a combination of methods that must be applied dynamically. Malicious actors are adaptable and constantly refine their techniques to bypass security measures.

Relying on a single verification method, such as a photo ID, may seem adequate, but it is relatively easy for determined fraudsters to circumvent.

At Persona, we are observing increasingly sophisticated fraud attempts, including the use of celebrity images and data, intricate photo editing of IDs, and even the creation of deepfakes for live selfie verification.

Therefore, verification systems must consider multiple data points, including actively collected information (like a photo ID), passive signals (IP address or browser fingerprint), and third-party data sources (phone and email risk lists). Combining these elements makes it far more likely that a stolen ID is flagged when its location or behavioral patterns don’t line up, prompting further investigation.
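To make the idea concrete, here is a minimal sketch of how active, passive, and third-party signals might be combined into a single risk decision. All signal names, weights, and the threshold are hypothetical illustrations, not Persona’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Signals gathered during one verification attempt (names are hypothetical)."""
    id_country: str           # country printed on the submitted photo ID (active)
    ip_country: str           # country inferred from the IP address (passive)
    email_on_risk_list: bool  # third-party email risk list hit
    phone_on_risk_list: bool  # third-party phone risk list hit
    device_seen_before: bool  # browser fingerprint already tied to another account

def risk_score(s: Signals) -> int:
    """Combine the signals into a simple additive score.

    Higher scores indicate the attempt should be escalated rather than
    auto-approved — e.g. a valid-looking but stolen ID submitted from the
    wrong country still accumulates points.
    """
    score = 0
    if s.id_country != s.ip_country:
        score += 2  # stolen IDs often surface far from the holder's location
    if s.email_on_risk_list:
        score += 1
    if s.phone_on_risk_list:
        score += 1
    if s.device_seen_before:
        score += 2  # one device creating many identities is a strong red flag
    return score

def decision(s: Signals, review_threshold: int = 2) -> str:
    """Route the attempt: approve outright or send to manual review."""
    return "manual_review" if risk_score(s) >= review_threshold else "approve"
```

The point of the additive design is that no single forged artifact is decisive: a fraudster who defeats the photo-ID check can still be caught by the passive and third-party signals.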

Combating Coordinated Disinformation

This holistic verification approach not only deters individual abusers but also prevents them from repeatedly creating new accounts under different usernames and email addresses – a common tactic of banned trolls and account abusers.

Furthermore, a multisignal approach can effectively address a larger challenge for social media platforms: coordinated disinformation campaigns. Dealing with groups of malicious actors is akin to battling a Hydra, where eliminating one threat results in the emergence of others.

However, a comprehensive verification system can identify these groups based on shared characteristics, such as location. While they will continue to seek new entry points, multifaceted verification tailored to the end user can limit their disruptive influence.

Expanding the Scope of Verification

Historically, identity verification systems like Jumio or Trulioo were designed for specific sectors, such as financial services. However, there is a growing demand for industry-agnostic solutions like Persona to address emerging verification use cases. Virtually any industry operating online can benefit from verification, even those like social media, where financial transactions aren’t necessarily involved.

The question is not if verification will become integral to solutions for challenges like moderation, but when. The necessary technology and tools are available, and it is now up to social media platforms to prioritize its implementation.

#moderation #online safety #verification #content moderation #trust and safety