
EU Accuses Tech Giants of COVID-19 Disinformation Cover-Up

June 3, 2021

EU Lawmakers Extend Disinformation Reporting Mandate for Tech Giants

European Union lawmakers have asked major technology companies to continue reporting, for a further six months, on their efforts to curb the spread of vaccine misinformation on their platforms.

The Commission says continued monitoring is vital as vaccination campaigns across the EU progress at a steady pace, with the coming months considered crucial for reaching high vaccination levels in Member States. During this critical period, it argues, it is paramount that vaccine hesitancy is not fuelled by harmful disinformation.

Platform Participation and Reporting Frequency

Facebook, Google, Microsoft, TikTok, and Twitter have committed to monthly reporting as signatories to the bloc’s Code of Practice on Disinformation, which is not legally binding. The companies are, however, set to move to bi-monthly reporting in the future.

Publishing the platforms’ latest reports, which cover April, the Commission said the tech giants have shown they cannot be relied on to police “dangerous falsehoods” on their own. At the same time, it continues to raise concerns about the quality and detail of the data the platforms voluntarily provide about their efforts to combat online disinformation.

Concerns and the Need for Robust Monitoring

“These reports underscore the importance of effectively monitoring the measures implemented by platforms to mitigate disinformation,” stated Věra Jourová, the EU’s VP for values and transparency. “We have decided to extend this program due to the persistent flow of dangerous lies into our information environment. It will also inform the development of a new generation Code against disinformation.”

Jourová emphasized the necessity of a robust monitoring program and clearer metrics to assess the impact of platform actions. She asserted that platforms cannot effectively self-regulate in this domain.

Strengthening the Voluntary Code

Last month, the Commission announced plans to enhance the voluntary Code, also seeking greater participation – particularly from the adtech sector – to help de-monetize harmful content.

The Code of Practice initiative originated before the pandemic, launching in 2018 amid growing concerns about the impact of “fake news” on democratic processes and public discourse. The COVID-19 crisis intensified these concerns, bringing the issue of dangerous misinformation into sharper focus for lawmakers.

A Co-Regulatory Approach and the Digital Services Act

For now, EU lawmakers do not intend to establish legally binding regional regulation of online disinformation. They favor a voluntary, “co-regulatory” approach that encourages platforms to act and engage on potentially harmful – but not illegal – content.

This includes providing tools for users to report issues and appeal takedowns, without the threat of direct legal penalties for non-compliance.

However, the Digital Services Act (DSA) will provide a new mechanism to increase pressure on platforms. The DSA sets rules for handling illegal content. Commissioners suggest that platforms actively collaborating with the EU’s disinformation Code may receive more favorable consideration from DSA regulators.

A New Chapter in Countering Disinformation

Thierry Breton, the commissioner for the EU’s Internal Market, stated that the combination of the DSA and the strengthened Code will usher in “a new chapter in countering disinformation in the EU.”

He added that he expects platforms to intensify their efforts and deliver the strengthened Code of Practice promptly, aligning with the Commission’s Guidance.

The Challenges of Regulating Disinformation

Regulating disinformation presents challenges, as the value of online content can be subjective. Centralized orders to remove information, even if demonstrably false or absurd, risk accusations of censorship.

Removing COVID-19-related disinformation, such as anti-vaccination messaging or listings for substandard PPE, is less contentious because the risks to public health are clear.

Promoting Pro-Speech Measures

The Commission appears to prioritize pro-speech measures taken by platforms, such as promoting vaccine-positive messaging and highlighting authoritative information sources. For example, Facebook launched vaccine profile picture frames to encourage vaccination, and Twitter introduced prompts during World Immunisation Week in 16 countries, generating 5 million impressions.

Platform Reporting on Content Removals

The April reports from Facebook and Twitter included more detailed information on actual content removals.

Facebook reported removing 47,000 pieces of content in the EU for violating COVID-19 and vaccine misinformation policies, a slight decrease from the previous month.

Twitter reported challenging 2,779 accounts, suspending 260, and removing 5,091 pieces of content globally related to COVID-19 disinformation in April.

Google reported taking action against 10,549 URLs on AdSense, a “significant increase” compared to March.

Interpreting the Data

However, the increase in Google’s enforcement actions is open to interpretation: it could indicate improved enforcement or a surge in COVID-19 disinformation on its ad network.

The ongoing challenge for regulators is quantifying the actions of these tech giants and understanding their effectiveness without standardized reporting and full data access.

For that, regulation – not selective self-reporting – would be required.
