Hate Speech in Online Ads: German Election Study

February 22, 2025
Hate Speech Ads Approved on Meta and X Before German Elections

Recent investigations by Eko, a nonprofit focused on corporate responsibility, reveal that Meta and X permitted the publication of advertisements containing violent, hateful rhetoric directed at Muslims and Jewish people in Germany.

This occurred during the period leading up to the nation’s federal elections. The research highlights a significant failure in the ad review processes of these major social media platforms.

Testing the Platforms’ Ad Review Systems

Researchers from Eko deliberately submitted advertisements designed to gauge the effectiveness of the platforms’ safeguards against hate speech. These ads contained explicitly hateful and violent messages targeting minority groups.

The timing of the tests was strategic, coinciding with a period where immigration policy had become a prominent topic in German political debate.

Examples of Approved Hate Speech

The submitted advertisements included deeply offensive content, such as:

  • Anti-Muslim slurs.
  • Advocacy for the imprisonment of immigrants in concentration camps.
  • Calls for the mass extermination of immigrants through gassing.
  • AI-generated images depicting the destruction of mosques and synagogues through arson.

Despite the egregious nature of this content, the majority of the test advertisements were approved remarkably quickly – within hours of submission in mid-February.

Implications for the German Federal Elections

Germany’s federal elections are scheduled for Sunday, February 23. The approval of these hateful ads raises serious concerns about the potential for online disinformation and incitement to violence during this critical democratic process.

The findings underscore the need for more robust and effective content moderation policies on social media platforms, particularly in the context of elections.

Hate Speech Advertisements Authorized for Publication

According to Eko, X granted approval for all ten hate speech advertisements submitted by its researchers shortly before the scheduled date of the federal election. Conversely, Meta authorized half of the submissions – five advertisements – for distribution on Facebook, with potential inclusion on Instagram, while rejecting the remaining five.

In rejecting the five advertisements, Meta cited concerns about political or social sensitivity that could influence the vote.

However, the five advertisements Meta did approve contained explicitly violent hate speech. These included dehumanizing language comparing Muslim refugees to a “virus,” “vermin,” or “rodents,” falsely accusing Muslim immigrants of being “rapists,” and advocating for their sterilization, burning, or gassing. An advertisement calling for the arson of synagogues to “stop the globalist Jewish rat agenda” was also approved by Meta.

Notably, Eko reports that none of the AI-generated images used in these hate speech advertisements were flagged as artificially created. Despite this, and despite Meta’s policy requiring disclosure of AI-generated imagery in ads concerning social issues, elections, or politics, half of the ads were still approved.

X, in contrast, approved all ten of the aforementioned hateful advertisements, alongside an additional five containing comparable violent hate speech directed at both Muslims and Jews.

These further approved advertisements featured attacks on “rodent” immigrants, falsely claiming they are “flooding” the country to undermine democracy, and employed an antisemitic trope alleging that Jews are fabricating climate change concerns to damage European industry and gain financial advantage.

This latter advertisement was paired with AI-generated imagery depicting a group of shadowy figures seated around a table laden with gold bars, featuring a Star of David on the wall – imagery heavily reliant on antisemitic stereotypes.

Another advertisement approved by X directly attacked the SPD, Germany’s current governing center-left party, with a false assertion that they intend to accept 60 million Muslim refugees. The ad then attempted to incite a violent reaction. X also scheduled an advertisement promoting the idea that “leftists” favor “open borders” and calling for the extermination of Muslim “rapists.”

Elon Musk, the owner of X, has actively intervened in the German election through his personal account on the platform, which has nearly 220 million followers. In a December tweet, he urged German voters to support the far-right AfD party to “save Germany.” He also hosted a livestream featuring Alice Weidel, the AfD’s leader, on X.

Eko’s researchers deactivated all test advertisements before any approved submissions could be displayed, preventing users from being exposed to the violent hate speech.

The organization asserts that these tests reveal significant deficiencies in the ad platforms’ content moderation procedures. In the case of X, it remains unclear if any ad moderation is being conducted, given the swift approval of all ten violent hate speech advertisements.

The findings also raise concerns that the ad platforms may be generating revenue through the dissemination of violent hate speech.

The EU’s Digital Services Act Under Scrutiny

Recent assessments by Eko indicate that both major platforms are failing to adequately enforce prohibitions against hate speech within their advertising content, despite publicly stated policies to the contrary. Notably, Eko’s prior investigation of Meta in 2023, conducted before the full implementation of new EU online governance regulations, yielded similar conclusions – raising questions about the effectiveness of the current regulatory framework.

An Eko representative communicated to TechCrunch that the organization’s results point to persistent deficiencies in Meta’s AI-powered ad moderation systems, even with the Digital Services Act (DSA) fully operational.

The representative further stated that, instead of enhancing its ad review procedures or hate speech policies, Meta seems to be reversing course. This observation is supported by the company’s recent announcement regarding the scaling back of moderation and fact-checking initiatives, which Eko views as a clear indication of “active regression” and a potential conflict with DSA stipulations concerning systemic risks.

Eko has submitted its latest findings to the European Commission, the body responsible for overseeing DSA enforcement concerning these social media platforms. The organization also shared the results directly with both companies, but received no response.

The EU currently has ongoing DSA investigations into both Meta and X, focusing on concerns related to election security and the presence of unlawful content. However, the Commission has not yet reached a conclusion in these proceedings. In April, the Commission expressed suspicions regarding Meta’s insufficient moderation of political advertising.

A preliminary assessment of the Commission’s DSA investigation into X, revealed in July, indicated potential failures to comply with the regulation’s ad transparency requirements. The comprehensive investigation, initiated in December 2023, also addresses risks associated with illegal content, but the EU has yet to publish findings on the majority of the probe, despite it being more than a year underway.

Violations of the DSA can result in financial penalties of up to 6% of a company’s global annual revenue. Severe and ongoing non-compliance could even lead to temporary restrictions on access to the violating platforms within the EU.

Currently, the EU is still deliberating on the Meta and X investigations, meaning any potential DSA sanctions remain uncertain.

As German voters prepare to head to the polls, a growing body of research from civil society organizations suggests that the EU’s flagship online governance regulation has not effectively protected the democratic process in a major EU economy from various technology-related threats.

This week, Global Witness published the results of tests conducted on the algorithmic “For You” feeds of X and TikTok in Germany. The findings suggest a bias in these platforms towards promoting content from the AfD party compared to other political groups. Researchers have also alleged that X has restricted data access, hindering efforts to assess election security risks leading up to the German election – access that the DSA is intended to guarantee.

Eko’s spokesperson emphasized the need for decisive action from the European Commission, stating, “The European Commission has taken important steps by opening DSA investigations into both Meta and X, now we need to see the Commission take strong action to address the concerns raised as part of these investigations.”

The spokesperson further added, “Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not voluntarily address issues on their platforms. Meta and X continue to permit the widespread dissemination of illegal hate speech, incitement to violence, and election disinformation, despite their legal obligations under the DSA.” (The spokesperson’s name has been withheld to safeguard against harassment.)

“Regulators must implement robust measures, including both DSA enforcement and proactive steps like pre-election mitigation. This could involve temporarily disabling profiling-based recommender systems before elections and implementing other emergency protocols to prevent the algorithmic amplification of problematic content, such as hateful material, during election periods.”

The campaign group also expressed concern that the EU is facing pressure from the Trump administration to adopt a more lenient approach to regulating Big Tech. They suggest, “In the current political climate, there’s a real danger that the Commission doesn’t fully enforce these new laws as a concession to the U.S.”

Update: Lara Hesse, a Meta spokesperson, provided a statement in response to Eko’s findings via email.

“These advertisements contravene our policies. They were not published, and our systems identified and deactivated the advertiser’s Page prior to our awareness of this research,” the statement explained. “Our ads review process incorporates multiple layers of analysis and detection, both before and after an advertisement is published. We have undertaken extensive measures in alignment with the DSA and continue to invest substantial resources to safeguard elections.”

Tags: Meta, hate speech, antisemitism, Islamophobia, online ads, German election