EC Calls for Law as Tech Giants Slow Hate Speech Removal

October 7, 2021
Tech Platforms Show Declining Performance in Removing Illegal Hate Speech

Recent assessments indicate that major technology companies are demonstrating diminished effectiveness in the removal of unlawful hate speech from their platforms, despite a voluntary agreement with the European Union.

The European Commission’s sixth evaluation report on the Code of Conduct on countering illegal hate speech online reveals a “mixed picture.” Platforms currently review 81% of reported content within 24 hours and remove approximately 62.5% of flagged material.

Lower Removal Rates Compared to Previous Years

These figures represent a decline when contrasted with the averages recorded in both 2019 and 2020, as highlighted by the Commission’s analysis.

The self-regulatory initiative launched in 2016, with Facebook, Microsoft, Twitter, and YouTube committing to remove hate speech that violates their community standards within 24 hours.

Subsequently, additional platforms—including Instagram, Google+, Snapchat, Dailymotion, Jeuxvideo.com, TikTok, and LinkedIn—also adopted the code’s principles.

Performance Falls Short of Expectations

While initial promises were ambitious, the actual performance of these platforms has frequently failed to meet stated goals. A previous trend of improvement has now either halted or reversed, according to the Commission’s findings, with Facebook and YouTube among those showing reduced performance.

Initial Focus on Terrorist Content

A primary impetus for establishing the code five years ago was the concern surrounding the proliferation of terrorist content online. Lawmakers aimed to exert rapid pressure on platforms to accelerate the removal of hate-promoting materials.

However, the EU has since enacted specific legislation addressing this issue: In April, a law mandating the removal of terrorist content within one hour was adopted.

The Digital Services Act

EU legislators have also proposed a comprehensive overhaul of digital regulations, expanding platform requirements regarding the handling of illegal content and goods.

The proposed Digital Services Act (DSA) has not yet been finalized, meaning the current self-regulatory code remains in effect—for the time being.

The Commission has indicated its intention to discuss the code’s future with participating companies, particularly in light of the “upcoming obligations and the collaborative framework” outlined in the DSA proposal. It remains uncertain whether the code will be discontinued or reinforced as a complement to the new legal framework.

Disinformation and Voluntary Codes

Regarding disinformation, the EU also employs a voluntary code to encourage the tech industry to combat the spread of false or misleading content. The Commission intends to maintain voluntary obligations while simultaneously strengthening measures and linking compliance—especially for larger platforms—to the legally binding DSA.

The stagnation in platforms’ hate speech removal rates suggests the voluntary approach may have reached its limits. Alternatively, platforms may be scaling back their efforts while awaiting the binding legal requirements of the DSA.

The Commission acknowledges that while some companies have experienced worsening results, others have shown improvement. However, this inconsistent performance highlights a key drawback of a non-binding code.

Insufficient User Feedback Remains a Concern

EU lawmakers have also identified “insufficient feedback” to users—through notifications—as a persistent “main weakness” of the code, mirroring findings from previous monitoring rounds. This reinforces the perceived need for legally enforceable standards, as proposed within the DSA, to standardize reporting procedures.

Věra Jourová, the Commission’s VP for values and transparency, emphasized the need for continued vigilance and the limitations of voluntary agreements, stating that the Digital Services Act will provide “strong regulatory tools to fight against illegal hate speech online.”

Didier Reynders, commissioner for Justice, added that companies “cannot be complacent” and must address any negative trends promptly, emphasizing the importance of protecting democratic spaces and fundamental user rights. He expressed confidence that the swift adoption of the DSA will help resolve existing gaps, such as insufficient transparency and user feedback.

Key Findings from the Monitoring Exercise

Additional findings from the monitoring of illegal hate speech takedowns include:

  • Removal rates varied based on the severity of the hateful content. 69% of content advocating for murder or violence against specific groups was removed, while 55% of content containing defamatory language or imagery targeting certain groups was removed. These figures decreased from 83.5% and 57.8% respectively in 2020.
  • IT companies provided feedback on 60.3% of received notifications, a decrease from the previous monitoring exercise (67.1%).
  • Sexual orientation was the most frequently reported basis for hate speech (18.2%), followed by xenophobia (18%) and anti-Gypsyism (12.5%).

The Commission also noted that, for the first time, signatories provided “detailed information” regarding measures taken to counter hate speech beyond the monitoring exercise, including automated content detection and removal systems.

#hate speech #tech giants #European Commission #online regulation #content moderation