
Automattic, Mozilla, Twitter and Vimeo urge EU to beef up user controls to help tackle ‘legal-but-harmful’ content

Natasha Lomas
Senior Reporter, TechCrunch
December 9, 2020

Automattic, Mozilla, Twitter, and Vimeo have collectively addressed an open letter to European Union legislators, advocating for careful consideration as the EU updates its digital regulations to avoid inadvertently restricting online freedom of expression.

The forthcoming Digital Services Act and Digital Markets Act are slated to be presented by the Commission next week, although, given the EU’s legislative process, it will likely take years for either to be enacted into law.

The Commission has stated that these proposed laws will establish defined responsibilities for platforms regarding the management of unlawful and detrimental content, alongside additional obligations for the largest digital companies to encourage competition within digital markets.

Legislation concerning transparency in political advertising is also planned, as part of a Democracy Action Plan, but its implementation is not anticipated until the third quarter of the following year.

In their jointly authored letter, titled ‘Crossroads for the open Internet’, the four technology companies contend that: “The Digital Services Act and the Democracy Action Plan will either revitalize the principles of the Open Internet or reinforce existing issues – by confining our online experience to a limited number of dominant platforms, while failing to adequately address the obstacles hindering the Internet from reaching its full potential.”

Regarding the challenge of regulating digital content without compromising dynamic online discourse, they propose a more sophisticated strategy for addressing “legal-but-harmful” content, emphasizing that freedom of speech should not equate to freedom of amplification. They urge EU lawmakers to avoid limiting their policy options to simple content removal, suggesting such a binary approach would disproportionately benefit the most powerful platforms.

Instead, they recommend addressing problematic, yet legal, content by prioritizing content visibility and ensuring users have genuine control over what they encounter online, indicating support for regulations that mandate meaningful user controls over algorithmic feeds, including the option to completely disable AI-driven curation.

“Currently, discussions frequently center on content removal alone, with success measured solely by the volume of content taken down in increasingly short timeframes. Without question, illegal content – including terrorist material and child sexual abuse imagery – must be removed promptly. Indeed, numerous self-regulatory initiatives proposed by the European Commission have already demonstrated the effectiveness of a unified EU approach,” they assert.

“However, by restricting policy options to a simple ‘up or down’ choice, we miss opportunities to develop more effective alternatives that could mitigate the spread and impact of problematic content while safeguarding rights and fostering competition for smaller businesses. Content removal should not be the sole focus of Internet policy, especially when dealing with ‘legal-but-harmful’ content, as this would primarily benefit the largest companies in our industry.”

“We therefore advocate for a discussion on content moderation that distinguishes between illegal and harmful content and highlights the potential of interventions that address how content is presented and discovered. This includes providing consumers with genuine control over the curation of their online experience.”

Twitter currently allows users to switch between a chronological content display and ‘top tweets’ (its algorithmically curated feed), arguably offering some user choice in this regard. However, its platform can also introduce content into a user’s feed, even if they haven’t specifically requested it, based on algorithmic predictions of interest. Therefore, complete user control is not yet fully realized.

Facebook provides a setting to disable algorithmic curation of its News Feed, but it is buried so deep in the settings that most users are unlikely to find it. This underlines the importance of defaults: algorithmic curation switched on by default, with a hidden off-switch, does not amount to meaningful user control.

Within the letter, the companies express support for “measures promoting algorithmic transparency and control, setting limits on the discoverability of harmful content, exploring community moderation further, and providing meaningful user choice”.

“We believe that a more sustainable and comprehensive approach involves limiting the number of people exposed to harmful content. This can be achieved by prioritizing technological solutions that focus on visibility rather than prevalence,” they suggest, adding: “The specific tactics will vary depending on the service, but the underlying principle will remain consistent.”

The Commission has indicated that algorithmic transparency will be a central component of the policy package, stating in October that the proposals will require major platforms to provide information on how their algorithms function when requested by regulators.

Commissioner Margrethe Vestager explained that the goal is to “empower users – so algorithms don’t have the final say in what we see, and what we don’t see” – suggesting potential requirements to address the tech industry’s manipulative design practices.

In their letter, the four companies also voice support for harmonizing notice-and-action procedures for responding to illegal content, to clarify obligations and ensure legal certainty, while also advocating for these mechanisms to “include measures proportionate to the nature and impact of the illegal content in question”.

The companies also want EU lawmakers to avoid a one-size-fits-all approach to regulating digital businesses and markets. The DSA/DMA split already suggests at least two distinct regulatory frameworks, and a more nuanced approach is anticipated.

“We recommend a technology-neutral and human rights-based approach to ensure legislation applies to all companies and adapts to evolving technologies,” they continue, adding a critique of the contentious EU Copyright directive as a reminder of the “significant drawbacks of prescribing generalized compliance solutions”.

“Our regulations must be flexible enough to accommodate and leverage emerging trends, such as the increasing decentralization of content and data hosting,” they argue, suggesting a “forward-looking approach” can be achieved by developing regulatory proposals that “optimize for effective collaboration and meaningful transparency between companies, regulators, and civil society”.

Here, they call for “co-regulatory oversight based on regional and global standards”, to ensure Europe’s updated digital rules are “effective, durable, and protective of individual rights”.

This joint call for collaboration, including civil society, contrasts with Google’s public response to the Commission’s DSA/DMA consultation, which largely focused on opposing ex ante rules for gatekeepers (a designation Google is likely to receive).

However, regarding liability for illegal content, the tech giant also advocated for clear distinctions between the handling of illegal material and what is considered “lawful-but-harmful.”

The complete details of the DSA and DMA proposals are expected next week.

A Commission spokesperson declined to comment on the specific positions outlined by Twitter and others, stating that the regulatory proposals will be unveiled “soon” (December 15 is the scheduled date).

Last week, while presenting the bloc’s strategy for addressing politically sensitive information and disinformation online, values and transparency commissioner Vera Jourova confirmed that the forthcoming DSA will not include specific rules for removing “disputed content”.

Instead, she stated that a strengthened code of practice for tackling disinformation will be implemented, expanding the current voluntary arrangement with additional requirements. These will include algorithmic accountability and improved standards for platforms to cooperate with independent fact-checkers, as well as addressing bots and fake accounts and establishing clear rules for researcher access to data – though none of these measures is currently legally binding.

“We do not want to create a ministry of truth. Freedom of speech is essential, and I will not support any solution that undermines it,” said Jourova. “However, we also cannot allow our societies to be manipulated by organized efforts to sow mistrust and undermine democratic stability, and we would be naive to permit this. We must respond with determination.”

