Europe to Regulate Adtech in Fight Against Disinformation

EU Intensifies Fight Against Online Disinformation
The European Union is preparing to strengthen its response to the proliferation of online disinformation. Today, the Commission announced it will increase efforts to address harmful content that isn't legally prohibited. This includes encouraging smaller digital services and advertising technology firms to adopt voluntary guidelines designed to counter the dissemination of manipulative and often damaging information.
Drivers for Increased Action
EU legislators highlighted the risks posed by disinformation, specifically citing threats to public health stemming from the spread of misleading information regarding COVID-19 vaccines as a key impetus for more robust measures.
Concerns regarding the impact of online disinformation on democratic processes are also a significant factor driving this initiative.
Commissioner Breton's Statement
Thierry Breton, Commissioner for Internal Market, stated: “It is essential to curtail the ‘infodemic’ and the spread of inaccurate information that endangers lives. Disinformation should not be a source of profit. We require stronger commitments from online platforms, the entire advertising sector, and networks of fact-checkers. The Digital Services Act will equip us with additional, effective tools to combat disinformation.”
New Code of Practice Under Development
A revised and more comprehensive code of practice on disinformation is currently being drafted. The Commission anticipates its finalization in September, with implementation scheduled to begin at the start of the following year.
Acknowledging Limitations of the Existing Approach
This shift represents a fairly public acknowledgement that the EU’s existing voluntary code of practice – an approach in place since 2018 – has not yielded the desired results.
A crucial aspect of this renewed effort is securing the cooperation of the adtech industry in demonetizing the spread of viral disinformation, a step that is long overdue.
The Persistent Problem of Online Disinformation
It is evident that the issue of online disinformation remains unresolved. Reports suggest that problematic activities, such as social media voter manipulation and computational propaganda, have worsened in recent years.
However, gaining a clear understanding of the true extent of the disinformation problem is challenging, as ad platforms are often reluctant to provide external researchers with full access to their systems.
The Commission aims to address this lack of transparency.
Current Signatories to the EU Code of Practice
The following entities are currently signatories to the EU’s code of practice on disinformation:
- Facebook (Meta)
- Google
- Twitter (X)
- Microsoft
- TikTok
Expanding Participation
EU lawmakers aim to broaden participation by bringing in smaller platforms and recruiting all players in the adtech space that facilitate the monetization of online disinformation.
Commissioners emphasized the need for the code to encompass a “wide range” of actors in the online advertising industry, extending beyond the current limited number.
Enhanced Information Sharing
The Commission also proposes that platforms and adtech companies exchange information regarding disinformation ads that have been rejected by one party. This would foster a more coordinated response to eliminate malicious actors.
Assessment of Current Signatories’ Performance
The Commission’s evaluation of the current signatories’ performance has been largely unfavorable.
During a press conference, Commissioner Breton indicated that only one of the five platform signatories has “truly” fulfilled its commitments.
Breton refrained from explicitly naming the four underperforming platforms, stating that it is not the Commission’s role to do so.
He suggested that the public should assess which of the signatory platforms have failed to meet their obligations. (Signatories pledged to disrupt ad revenues of disinformation sources, enhance transparency in political advertising, combat fake accounts, empower users to report disinformation, improve access to diverse news sources, and facilitate research access to platform data.)
Determining which of the tech giants is meeting the Commission’s standards is difficult. (Microsoft may be an exception due to its comparatively limited social media activity.)
Over the past three years, there has been considerable rhetoric regarding disinformation, but limited demonstrable accountability from major social platforms.
Facebook’s Response
Coincidentally, Facebook chose today to highlight its historical efforts to combat “influence operations” – defined as “coordinated efforts to manipulate or corrupt public debate for a strategic goal” – by publishing a “threat report” covering the period between 2017 and 2020.
The report states that Facebook removed and publicly reported only 150 such operations during that timeframe.
Concerns Regarding Facebook’s Response
However, as revealed by Facebook whistleblower Sophie Zhang, the scale of malicious manipulation on Facebook’s platform is substantial, and the company’s response is under-resourced and primarily focused on public relations.
(A memo by Zhang, covered by BuzzFeed, detailed a lack of support for her work and the rapid re-emergence of removed influence operations without intervention from Facebook.)
Facebook also publishes an “Inauthentic Behavior Report” detailing efforts against deceptive tactics that don’t meet its criteria for “Coordinated Inauthentic Behavior”.
Towards Legally Binding Rules
While legally binding rules on handling online disinformation are not currently in the EU’s pipeline, commissioners expressed a desire for a “more binding” code of practice.
The Digital Services Act (DSA), a broader package of digital reforms, provides potential leverage. The DSA will introduce legally binding rules for handling illegal content, and the Commission intends for the strengthened disinformation code to complement it.
Under this approach, adherence to the code could earn platforms “credibility” that counts toward DSA compliance, giving them an incentive to sign up.
Avoiding Censorship
Brussels maintains that it does not intend to legislate on disinformation, as a centralized approach could be perceived as censorship.
The EU’s digital regulation packages aim to increase transparency, safety, and accountability online, according to Vera Jourova, Commissioner for Values and Transparency.
Timing and Adaptation
Commissioner Breton stated that now is the “appropriate time” to deepen obligations under the disinformation code, coinciding with the implementation of the DSA, and to allow platforms time to adapt and contribute to shaping additional obligations.
Audit Powers for Regulators
Breton also emphasized the need for regulators to “audit platforms” to monitor the algorithms that promote problematic practices.
The feasibility of audit powers within a voluntary, non-legally binding framework remains uncertain.
Addressing Code Shortcomings
Jourova highlighted inconsistencies in the application of the current code across different EU Member States and languages.
She also expressed a desire for the revised code to empower users to report suspicious content and to appeal content takedowns, mitigating the risk of erroneous removals.
The focus will be on addressing false “facts, not opinions”, with a goal of integrating fact-checking into platforms’ systems and fostering a “decentralized culture of facts”.
Data Access for Researchers
Jourova noted that current signatories have not provided external researchers with the level of data access the Commission desires, hindering transparency and accountability.
The code requires reports from signatories, but the information supplied so far has not given a comprehensive picture of disinformation activity.
Evolving Tactics and Problematic Techniques
Jourova warned that online manipulation tactics are constantly evolving and innovative. She suggested that signatories should agree on a set of identifiable “problematic techniques” to expedite responses.
Addressing Political Ads
EU lawmakers will present a specific plan for enhancing transparency in political advertising in November.
Countering Foreign Interference
They are also working on strategies to counter threats to European democracies posed by foreign interference, such as influence operations on platforms like Facebook.
Commissioners indicated that imposing costs on perpetrators of disinformation, potentially including trade sanctions for state-backed operations, is being considered.
Protecting Democratic Values
Breton reiterated the importance of countering foreign influence over the “informational space” to safeguard European democratic values.
The Commission’s efforts will also focus on education to equip citizens with the critical thinking skills needed to navigate the vast and often unreliable information landscape.
This report was updated with a correction: we originally misstated that the IAB is not a signatory of the code; in fact, it joined in May 2018.