Social Media Algorithms Exhibit Right-Wing Bias in Germany
Research conducted by Global Witness indicates that recommendation algorithms on TikTok and X demonstrate significant political bias towards the far-right in Germany. This bias has been observed leading up to the upcoming federal election on Sunday.
Analysis of Algorithmically Sorted Feeds
The non-governmental organization (NGO) performed an analysis of content presented to new users through algorithmically curated “For You” feeds. The investigation revealed a pronounced tendency on both platforms to amplify content supporting the far-right AfD party.
TikTok Shows Highest Level of Bias
Global Witness’ testing pinpointed the most substantial bias on TikTok. A remarkable 78% of the political content algorithmically suggested to test accounts – content originating from accounts not previously followed – expressed support for the AfD.
This figure significantly surpasses the party’s current polling numbers, which indicate approximately 20% support among German voters.
X Also Displays a Right-Wing Skew
On X, the research team discovered that 64% of the algorithmically recommended political content favored the AfD party.
Disproportionate Exposure to Right-Leaning Content
The study assessed the platforms’ algorithms for broader left- or right-leaning biases. Results suggest that German social media users who do not actively seek out political content are exposed to right-leaning material more than twice as frequently as left-leaning content as the federal elections approach.
Comparative Bias Across Platforms
TikTok exhibited the strongest right-wing lean, surfacing right-leaning content 74% of the time. X followed closely behind at 72%.
Instagram Shows Moderate Right-Wing Lean
Meta’s Instagram was also tested, across three trials conducted by the NGO. While it too leaned rightward, the bias was less pronounced, with 59% of the political content categorized as right-wing.
- TikTok: 78% of recommended political content supported the AfD.
- X: 64% of recommended political content supported the AfD.
- Instagram: 59% of recommended political content was right-wing.
Examining Potential Political Bias in “For You” Recommendations
Global Witness researchers conducted tests to determine whether the recommendation algorithms of major social media platforms exhibited a political leaning. The study involved creating three accounts each on TikTok and X, as well as an additional three on Instagram, which is owned by Meta. The goal was to observe the types of political content these platforms would prioritize for users demonstrating a neutral interest in political matters.
To simulate unbiased users, the test accounts followed the official accounts of Germany’s four largest political parties – the conservative CDU, the center-left SPD, the far-right AfD, and the left-leaning Greens. They also followed the personal accounts of each party’s leader: Friedrich Merz, Olaf Scholz, Alice Weidel, and Robert Habeck.
The researchers ensured consistent engagement by having each test account interact with the top five posts from every followed account. This included watching videos for a minimum of 30 seconds and fully exploring threads and images, as documented by Global Witness.
Subsequently, the content algorithmically presented to these accounts was collected and analyzed. The analysis revealed a significant tendency towards right-leaning content in the platforms’ recommendations.
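The headline figures in the study are simple shares: the fraction of recommended political posts coded for a given party or leaning. A minimal sketch of that tally, using hypothetical labels and toy counts (the label names and numbers here are illustrative, not Global Witness's actual coding scheme or data), might look like:

```python
from collections import Counter

def party_share(labels, party):
    """Return the share of recommended political posts coded for `party`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return counts[party] / total if total else 0.0

# Hypothetical feed sample: 39 of 50 recommended political posts coded "afd".
sample_feed = ["afd"] * 39 + ["left"] * 11
print(party_share(sample_feed, "afd"))  # 39/50 = 0.78, matching TikTok's reported figure
```

In the actual research, each post was coded manually by researchers; a script like this would only aggregate those human judgments into the percentages reported above.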
“A key concern for us is the opacity surrounding the reasons behind the content suggestions we received,” explained Ellen Judson, a senior campaigner at Global Witness focusing on digital threats, in an interview with TechCrunch. “While our findings suggest a potential bias, the lack of transparency regarding how these recommender systems operate remains a significant issue.”
Judson further elaborated, “We understand that numerous signals are utilized, but the precise weighting of these signals, and the assessment of their potential to amplify risks or biases, are not clearly disclosed.”
“Our hypothesis is that this outcome is an unintended consequence of algorithms designed to maximize user engagement,” she stated. “When platforms initially intended for user engagement evolve into spaces for democratic discourse, a conflict arises between commercial goals and the public interest, as well as democratic principles.”
These results align with previous social media research conducted by Global Witness concerning recent elections in the U.S., Ireland, and Romania. Furthermore, other studies over the past few years have also indicated a rightward lean in social media algorithms, such as a research project focused on YouTube conducted last year.
As far back as 2021, an internal study conducted by Twitter – prior to its acquisition and rebranding by Elon Musk as X – demonstrated that its algorithms favored right-leaning content over left-leaning content.
Despite this evidence, social media companies generally deflect accusations of algorithmic bias. TikTok, after being presented with Global Witness’s findings, questioned the research methodology, asserting that conclusions about algorithmic bias could not be drawn from a limited number of test accounts. Judson noted their response was that the accounts did not accurately represent typical user behavior.
X did not provide a response to Global Witness’s findings. Musk, however, has publicly cast the platform as a bastion of free speech, a stance that may serve as a justification for promoting a right-leaning perspective.
Notably, X’s owner has actively used the platform to support the AfD, urging German citizens to vote for the far-right party in upcoming elections. He also hosted a live-streamed interview with Alice Weidel, a prominent figure within the AfD, which significantly increased the party’s visibility. Musk maintains the most-followed account on X.
Examining Algorithmic Transparency
Judson emphasizes the significance of transparency, stating, “The transparency aspect is truly crucial.” Concerns have been raised regarding Elon Musk’s engagement with the AfD, including his posts and a livestream featuring Weidel. However, it remains unclear whether these interactions have triggered any alterations within the platform’s algorithms.
She further indicated that Global Witness has presented its findings to EU officials tasked with enforcing the bloc’s algorithmic accountability regulations for major platforms, hoping this data will prompt an investigation into potential biases.
Challenges in Studying Algorithms
Investigating the operational mechanisms of proprietary content-sorting algorithms presents considerable difficulties. Platforms generally safeguard these details, citing them as protected commercial secrets.
To address this, the European Union recently implemented the Digital Services Act (DSA) – a comprehensive online governance framework – aiming to enhance public interest research into systemic and democratic risks on prominent platforms like Instagram, TikTok, and X.
The DSA incorporates provisions designed to compel major platforms to increase transparency regarding their information-shaping algorithms and proactively address systemic risks that may emerge.
Implementation of the DSA
Despite the DSA having applied to the three platforms since August 2023, Judson points out that certain aspects are still awaiting full implementation.
Specifically, Article 40, intended to grant vetted researchers access to non-public platform data for studying systemic risks, has not yet been activated. This is due to the EU’s pending passage of the necessary delegated act to enact this portion of the law.
The EU’s current strategy relies heavily on platforms’ self-reporting of risks, followed by review by enforcement bodies. Judson suggests that initial risk reports from platforms may be limited in their disclosures, requiring time for enforcers to analyze them and request more comprehensive information if needed.
Currently, without improved access to platform data, researchers are unable to definitively determine whether inherent biases exist within mainstream social media.
“Civil society organizations are closely monitoring the availability of access for vetted researchers,” she states, expressing hope that this crucial component of the DSA will be operational this quarter.
Effectiveness and Caution
The regulation has yet to yield rapid results concerning social media and democratic risks. The EU’s cautious approach may prove insufficient to address algorithmically amplified threats effectively. However, the EU is also demonstrably careful to avoid any actions that could be construed as restricting freedom of expression.
The Commission is currently conducting investigations into all three social media companies implicated in the Global Witness research. While no enforcement actions have been taken specifically regarding election integrity, scrutiny of TikTok has recently increased, leading to a new DSA proceeding related to concerns about the platform’s role in potential Russian interference in Romania’s presidential election.
“We are requesting the Commission to investigate potential political bias,” Judson clarifies. “The platforms maintain there is none, but our findings suggest otherwise. We hope the Commission will utilize its expanded information-gathering capabilities to ascertain the truth and take appropriate action if bias is confirmed.”
The pan-EU regulation empowers enforcers to impose penalties of up to 6% of a company’s global annual turnover for violations, and even temporarily restrict access to platforms that fail to comply.