YouTube Recommendations Still Problematic, Study Finds

YouTube's Recommendation Algorithm Under Scrutiny
For years, YouTube’s video recommendation system has been accused of fueling a range of societal harms. The core claim is that the algorithm prioritizes content designed to maximize user engagement, even if that content includes hate speech, political extremism, or disinformation, all in the interest of increasing advertising revenue.
Google, YouTube’s parent company, has periodically responded to public concern about these problematic recommendations, but its responses have typically been limited to minor policy adjustments or the removal of individual accounts spreading harmful content.
However, it remains questionable whether these actions have fundamentally altered the platform’s tendency to promote sensational and detrimental clickbait.
New Research Highlights Persistent Issues
Recent research conducted by Mozilla supports the idea that YouTube’s AI continues to amplify low-quality, divisive, and misleading content. This content often aims to attract attention by provoking strong emotional reactions, fostering polarization, or disseminating false information.
This suggests that YouTube’s issues with recommending problematic content are not isolated incidents, but rather a systemic problem. It’s believed to be a consequence of the platform’s strong focus on maximizing views for advertising purposes.
Mozilla’s study indicates that Google has been effective in mitigating criticism through superficial claims of reform.
The Role of Algorithmic Opacity
A key factor in Google’s ability to deflect criticism is the lack of transparency surrounding the recommendation engine’s inner workings. The algorithmic processes and the data that feeds them remain largely hidden from public scrutiny on grounds of commercial secrecy.
However, potential regulatory changes, particularly in Europe, could compel companies to open up these “black boxes” and allow for greater oversight.
Proposed Solutions for Improvement
Mozilla advocates for a multi-faceted approach to address YouTube’s algorithmic issues. This includes:
- Transparency laws that require companies to disclose information about their AI systems.
- Protection for independent researchers to enable them to analyze the impact of algorithms.
- Empowering users with greater control over recommendations, such as the option to disable personalized suggestions.
These measures are intended to curb the most harmful aspects of YouTube’s AI and promote a more responsible platform environment.
Crowdsourcing Data on YouTube “Regrets”
To obtain data on the specific recommendations delivered to YouTube users – information Google doesn’t typically share with external researchers – Mozilla took a crowdsourced approach, building a browser extension called RegretsReporter that lets users report videos they regretted watching.
The tool generates reports detailing the recommended video alongside the user’s prior viewing history, giving researchers a window into how YouTube’s recommendation system operates, and where it fails to meet user expectations.
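Mozilla has not published RegretsReporter’s internal data schema in this report, but purely as an illustration, a crowdsourced regret report of this kind might carry fields along the following lines (all names here are hypothetical, not the extension’s actual format):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch only: these field names are illustrative and are NOT
# the actual RegretsReporter schema, which this article does not publish.
@dataclass
class RegretReport:
    reported_video_id: str              # the video the volunteer regretted watching
    reached_via: str                    # e.g. "recommendation" or "search"
    prior_video_ids: List[str] = field(default_factory=list)  # viewing trail before the regret
    category: Optional[str] = None      # e.g. "misinformation", "hate speech", "scam"
    country: Optional[str] = None       # used to compare English- vs non-English-speaking markets
    reported_at: Optional[str] = None   # ISO timestamp of the report
```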
Volunteers contributing data to Mozilla’s research reported a diverse range of “regrets,” encompassing videos promoting COVID-19 misinformation, political inaccuracies, and unsuitable content for children. The most frequently reported categories included misinformation, violent content, hate speech, and scams.
A significant majority – 71% – of regret reports originated from videos directly recommended by YouTube’s algorithm, highlighting the AI’s role in presenting problematic content to users.
The research also revealed that recommended videos were 40% more likely to be reported than videos users actively searched for.
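To make that 40% figure concrete: it is a relative comparison of report rates, not of absolute counts. A minimal sketch with made-up numbers (not Mozilla’s raw data) shows how such a figure can be derived:

```python
# Illustrative only: hypothetical totals, not Mozilla's data. Shows how a
# "40% more likely to be reported" figure falls out of two report rates.
recommended = {"watched": 100_000, "reported": 70}
searched    = {"watched": 100_000, "reported": 50}

rate_recommended = recommended["reported"] / recommended["watched"]
rate_searched    = searched["reported"] / searched["watched"]

relative_increase = (rate_recommended / rate_searched - 1) * 100
print(f"Recommended videos reported {relative_increase:.0f}% more often")  # -> 40%
```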
Mozilla identified instances where the recommendation algorithm presented content violating YouTube’s community guidelines or unrelated to the user’s previous viewing activity, indicating a clear system failure.
Regrettable content appears to be a more pronounced issue for YouTube users in non-English speaking countries. Mozilla found YouTube regrets were 60% higher in countries where English isn’t the primary language, with Brazil, Germany, and France exhibiting particularly high levels of regretful viewing.
Pandemic-related regrets were also more common in non-English speaking countries, a concerning finding given the ongoing global health crisis.
The crowdsourced study – which Mozilla describes as the largest of its kind focusing on YouTube’s recommender algorithm – utilized data from over 37,000 users who installed the extension. However, the analysis directly draws upon reports from a subset of 1,162 volunteers, representing 91 countries, who flagged 3,362 regrettable videos.
These reports were compiled between July 2020 and May 2021.
Mozilla defines a YouTube “regret” as a user-reported negative experience, making it a subjective measure. However, Mozilla argues this “people-powered” approach prioritizes the experiences of internet users, particularly those from marginalized or vulnerable communities, over solely applying legal definitions of harm.
Brandi Geurkink, Mozilla’s senior manager of advocacy and the project’s lead researcher, explained the research aims were to investigate and confirm anecdotal stories of users falling down the “YouTube rabbit hole” and to identify emerging trends.
“My primary feeling throughout this work was shock that many of our expectations were confirmed,” Geurkink stated. “Despite the study’s limitations in scope and methodology, the data clearly demonstrated certain patterns.”
“For example, the algorithm recommending content that later violates its own policies, and the worse experiences reported by non-English-speaking users – these are issues frequently discussed anecdotally. But it was striking to see them so clearly reflected in our data.”
Mozilla’s research uncovered numerous examples of reported content likely breaching YouTube’s community guidelines, including hate speech and debunked misinformation.
The reports also highlighted a significant amount of what YouTube might categorize as “borderline content” – material that’s difficult to classify, potentially low-quality, and may skirt the boundaries of acceptability, making it harder for algorithmic moderation systems to address.
However, the report notes that YouTube doesn’t provide a clear definition of “borderline content,” hindering verification of whether reported regrets fall into this category.
A recurring theme in the research is the difficulty of independently studying the societal impact of Google’s technology and processes. Mozilla’s report also accuses Google of responding to YouTube criticism with “inertia and opacity.”
Critics have long alleged that YouTube’s parent company profits from engagement generated by harmful disinformation and hateful content, allowing “AI-generated bubbles of hate” to flourish and exposing users to extremist views while shielding its content business under a user-generated content framework.
“Falling down the YouTube rabbit hole” has become a common metaphor for users being drawn into the web’s darkest corners. This reshaping of what people watch happens in plain sight, through AI-generated suggestions that steer individuals along conspiracy-theory paths on a mainstream platform.
As early as 2017, European politicians accused YouTube’s algorithm of automating radicalization during heightened concerns about online terrorism and the spread of ISIS content on social media.
However, obtaining concrete data to support anecdotal reports of users being radicalized by extremist content or conspiracy theories on YouTube has remained challenging.
Guillaume Chaslot, a former YouTube insider, has attempted to shed light on the platform’s proprietary technology through his AlgoTransparency project.
Mozilla’s crowdsourced research complements these efforts by presenting a broad – and largely problematic – picture of the YouTube AI based on user-reported experiences.
While external sampling of platform data can’t provide a complete picture, and self-reporting may introduce biases, the difficulty of studying Big Tech’s “black boxes” is a central point of the research, as Mozilla advocates for platform oversight.
The report recommends “robust transparency, scrutiny, and user control of recommendation algorithms,” arguing that without oversight, YouTube will continue to expose people to damaging content.
The lack of transparency surrounding YouTube’s operations is evident in other report details. For instance, Mozilla found that approximately 9% of recommended regrets – nearly 200 videos – had been removed, for reasons that weren’t always clear.
These videos collectively amassed 160 million views before removal.
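Some rough arithmetic on the figures quoted above, using rounded numbers from the report, gives a sense of scale (the results only approximate Mozilla’s exact counts):

```python
# Back-of-the-envelope arithmetic from the figures quoted in this article;
# rounding means the results only approximate Mozilla's exact counts.
total_regrets        = 3_362          # regrettable videos flagged by volunteers
recommended_share    = 0.71           # 71% were reached via YouTube's recommendations
removed_share        = 0.09           # ~9% of recommended regrets were later taken down
views_before_removal = 160_000_000    # combined views of the removed videos

recommended_regrets = total_regrets * recommended_share      # ~2,387 videos
removed_videos      = recommended_regrets * removed_share    # ~215, i.e. "nearly 200"
avg_views_each      = views_before_removal / removed_videos  # ~745,000 views per video

print(round(recommended_regrets), round(removed_videos), round(avg_views_each))
```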
The research also indicated that regretful videos tend to perform well on the platform.
Reported regrets received 70% more views per day than other videos watched by the volunteers, supporting the argument that YouTube’s engagement-optimizing algorithm favors triggering or misinforming content over higher-quality material because such content drives more clicks.
While beneficial for Google’s advertising revenue, this is detrimental to democratic societies that value truthful information, constructive debate, and civic cohesion.
Without legally enforced transparency and regulatory oversight, tech giants will likely continue to prioritize profits over societal well-being.
Mozilla’s report also highlights instances where YouTube’s algorithms operate based on logic unrelated to the content itself. In 43.6% of cases where researchers had data on a participant’s viewing history before reporting a regret, the recommendation was completely unrelated to the previous video.
Examples include a user watching videos about the U.S. military being recommended a misogynistic video, a video about software rights leading to a recommendation about gun rights, and an Art Garfunkel music video followed by a political video alleging bias in a Trump debate moderator.
These instances appear to be “AI brain farts” at best, and potentially demonstrate a bias towards right-leaning political content.
Geurkink stated the most concerning finding was the prevalence of misinformation on the platform, and the disproportionately negative experiences reported by non-English-speaking users, which she believes doesn’t receive enough attention.
Google responded to Mozilla’s report by welcoming research into YouTube and exploring options for external researchers to study the platform, without providing specifics. It also questioned Mozilla’s definition of “regrettable” content and claimed its user surveys indicate satisfaction with its recommendations.
Google highlighted its recent disclosure of a “violative view rate” (VVR) metric, revealing that 0.16%-0.18% of YouTube views come from content violating its policies. It attributes a 70% reduction in this rate since 2017 to investments in machine learning.
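For context, a back-of-the-envelope calculation, assuming the claimed 70% reduction applies directly to the rate itself (Google has not published the 2017 figure here), implies a 2017 baseline of roughly 0.5%–0.6% of views:

```python
# Rough arithmetic only: assumes the quoted 70% reduction applies directly to
# the violative view rate; the actual 2017 figure is not given in this article.
vvr_now_low, vvr_now_high = 0.0016, 0.0018   # 0.16%-0.18% of views today
reduction = 0.70                             # claimed drop since 2017

implied_2017_low  = vvr_now_low  / (1 - reduction)   # ~0.53%
implied_2017_high = vvr_now_high / (1 - reduction)   # ~0.60%
print(f"Implied 2017 VVR: {implied_2017_low:.2%} to {implied_2017_high:.2%}")
```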
However, Geurkink noted that the VVR is limited without data on the algorithm’s role in amplifying violative content. She suggested the VVR may be a distraction.
Google also mentioned a 2019 change to its recommender algorithm to reduce amplification of conspiracy-theory content, which it claims resulted in a 70% drop in watch time for this type of content. However, the lack of a fixed baseline makes this percentage drop difficult to interpret.
Notably, Google’s response doesn’t address the negative experiences reported by users in non-English-speaking markets. Geurkink pointed out that many of Google’s mitigating measures are geographically limited, often starting in English-speaking markets like the U.S. and U.K.
A 2019 change to reduce conspiracy-theory amplification in the U.S. wasn’t expanded to the U.K. until months later.
“YouTube has, for the past few years, only been reporting on its progress of recommendations of harmful or borderline content in the U.S. and in English-speaking markets,” she said. “And there are very few people questioning that – what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”
Mozilla advocates for greater transparency and data access to enable researchers to study AI technologies effectively. The EU’s Digital Services Act offers a promising avenue for increased transparency, but Mozilla believes it needs to be strengthened to address recommender systems specifically.
Geurkink suggested a “data access framework” within the law would allow vetted researchers to access the information needed to study powerful AI technologies, rather than relying on a detailed list of transparency requirements.
The EU’s draft AI regulation takes a risk-based approach, but it’s unclear whether YouTube’s recommender system would fall under the more closely regulated categories.
“An earlier draft of the proposal talked about systems that manipulate human behavior, which is essentially what recommender systems are,” Geurkink noted. “So it was sort of difficult to understand exactly where recommender systems would fall into that.”
Ultimately, leaving it to platforms to self-regulate is insufficient. Greater transparency, regulatory oversight, and enforcement are necessary to address the harmful impacts of AI-driven recommendation systems.