
Instagram DM Filtering: New Tools to Block Abuse

April 21, 2021

Addressing Harassment: New Tools on Instagram

For a long time, Facebook and its associated applications have been actively working to improve the management – and ultimately, the elimination – of bullying and harassment occurring on their platforms. This has involved utilizing both algorithmic solutions and human moderation. Today, Instagram is announcing a set of new features designed to enhance user safety.

Enhanced Direct Message Protection

Firstly, a new feature gives users greater control over their direct messages by automatically filtering message requests. The filter matches incoming requests against a collection of words, phrases, and emojis that potentially indicate abusive content, and it also catches common misspellings of those terms, which senders often use to circumvent existing filters.
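Instagram has not published how its filter works internally, but the mechanism described above – a blocklist plus tolerance for deliberate misspellings – can be sketched in a few lines. The blocklist entries and substitution table below are hypothetical placeholders, not Instagram's actual terms:

```python
import re

# Hypothetical blocklist; Instagram's real list is developed with
# anti-discrimination and anti-bullying organizations.
BLOCKED_TERMS = {"loser", "idiot"}

# Common character substitutions used to dodge keyword filters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo lookalike-character substitutions, and collapse
    repeated letters ("looooser" -> "loser"). Collapsing all repeats is
    deliberately aggressive for this sketch."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"(.)\1+", r"\1", text)

def should_filter(message: str) -> bool:
    """Return True if any normalized word in the message is blocklisted."""
    words = re.findall(r"[a-z]+", normalize(message))
    return any(word in BLOCKED_TERMS for word in words)
```

A production system would need far more than this – phrase matching, emoji handling, and per-language rules – but the core idea is normalizing obfuscated text before comparing it to a curated list.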

Proactive Blocking of Accounts

Secondly, Instagram is empowering users to proactively block individuals, even if those individuals attempt to contact them using newly created accounts. This addresses a common tactic used by those engaging in harassment.

The account blocking feature will be available worldwide in the coming weeks, according to Instagram. The filtering of abusive DMs will initially launch in the U.K., France, Germany, Ireland, Canada, Australia, and New Zealand within the same timeframe, with broader availability planned for the following months.

Platform-Specific Rollout

It’s important to note that these features are currently exclusive to Instagram. They are not being implemented on Messenger or WhatsApp, Facebook’s other popular messaging applications. A spokesperson indicated that Facebook intends to bring the features to Messenger later this year, but there are no current plans for WhatsApp.

User Control and Privacy

Instagram’s new DM filtering feature – which does not scan messages, but rather filters based on a predefined list – relies on a compilation of words and emojis. This list is developed in collaboration with anti-discrimination and anti-bullying organizations, and users can also add their own terms. However, the feature must be actively enabled by the user.

This approach prioritizes user autonomy and privacy. As a spokesperson explained, “We want to respect people’s privacy and give people control over their experiences in a way that works best for them.” This mirrors the functionality of Instagram’s existing comment filters, accessible through Settings > Privacy > Hidden Words.

Building In-House Moderation Tools

Several third-party companies, such as Sentropy and Hive, offer content moderation tools that detect harassment and hate speech. Facebook, however, has chosen to build these tools internally, and the new Instagram features continue that in-house approach.

Automated System with Human Oversight

The system operates automatically, but Facebook does review any content that is reported by users. While user interaction data isn’t retained, reported terms are used to refine and expand the database of keywords that trigger content blocking, deletion, and account reporting.
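The feedback loop described here – reported terms feeding back into the keyword database – could be sketched as follows. The function name, report threshold, and word-level granularity are illustrative assumptions, not Facebook's actual pipeline:

```python
from collections import Counter

def refine_blocklist(blocklist: set[str],
                     reported_messages: list[str],
                     min_reports: int = 3) -> set[str]:
    """Return an expanded blocklist containing words that appear in at
    least `min_reports` user-reported messages.

    A real system would weigh many more signals (context, reviewer
    decisions, language) before promoting a term; the threshold here
    only keeps one-off words out of the list.
    """
    counts = Counter(
        word
        for msg in reported_messages
        for word in msg.lower().split()
    )
    new_terms = {word for word, n in counts.items() if n >= min_reports}
    return blocklist | new_terms
```

Note that, per the article, user interaction data itself is not retained; only the reported terms feed this refinement step.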

Combating Account Proliferation

Addressing the issue of repeat offenders creating multiple accounts to bypass blocks has been a long-standing challenge. Facebook’s harassment policies already prohibit repeatedly contacting individuals who have expressed disinterest. The company also prohibits the creation of new accounts by those whose previous accounts were disabled for violating its rules.

Instagram’s open messaging design contributes to the problem: messages from known contacts land in the main inbox, while requests from everyone else are routed to a separate message requests folder. Because the platform encourages broader connection, users check their message requests far more often than they might check a spam folder in email.

Ongoing Challenges and Regulatory Scrutiny

Effectively managing harassment remains a continuous effort, often described as a “whack-a-mole” scenario. Users are increasingly demanding more robust solutions. Furthermore, Facebook faces growing scrutiny from regulators, making the effective management of harassment a critical priority.

The company must demonstrate progress in this area, or risk external intervention.

#instagram #dm filtering #abusive messages #block users #online safety #social media