EU to Regulate Political Ads & Combat Disinformation in 2024

December 3, 2020

European Union legislators are preparing to introduce updated regulations for digital political advertising in the coming year, focused on increasing openness regarding promotional political material.

The Commission announced today its desire for individuals, community groups, and governing bodies to readily identify the origin and intent behind the political advertisements they encounter online.

“It is our firm belief that individuals have a right to understand the reasons behind the advertisements they view, the entities funding them, the financial investment involved, and the specific microtargeting techniques employed,” stated Commissioner Vera Jourova during a press conference presenting the Democracy Action Plan.

“Emerging technologies should empower individuals—rather than be used to influence them,” she further explained.

According to the plan, the upcoming proposal for political ad transparency will “focus on those funding paid content, as well as the methods of its creation and dissemination, including online services, advertisers, and political strategists, clearly defining their obligations and establishing a firm legal framework.”

“This undertaking will establish which parties and what kinds of sponsored content are subject to the stricter transparency standards. It will promote responsibility and allow for the oversight and implementation of applicable regulations, inspections, and access to anonymized data, as well as support thorough investigation,” the document continues.

The Commission aims to have these new regulations implemented well before the European Parliament elections in May 2024—with the commissioner for values and transparency confirming the legislative action is scheduled for the third quarter of 2021.

Democracy Action Plan

This initiative is part of a comprehensive Democracy Action Plan, a series of actions designed to support free and equitable elections throughout the EU, enhance media diversity, and improve media literacy over the remainder of the Commission’s term.

The Commission is responding to increasing anxieties that current election regulations haven’t adapted to the changes brought about by digital technologies, including the proliferation of online misinformation, which poses risks to democratic principles and public confidence.

There is concern that established procedures are being overwhelmed by the capabilities of digital advertising, which often operates without transparency and relies heavily on extensive personal data.

According to the Commission’s plan, “The substantial expansion of online campaigning and digital platforms has… created new vulnerabilities and complicated efforts to safeguard election integrity, ensure a free and diverse media landscape, and shield the democratic process from disinformation and other forms of manipulation.” The plan also recognizes that digitalization has facilitated the flow of undisclosed funds to political organizations.

The plan further identifies issues such as “cyberattacks aimed at election infrastructure; journalists experiencing online harassment and hate speech; coordinated disinformation campaigns rapidly disseminating false and divisive content through social media; and the significant impact of opaque algorithms used by popular communication platforms”.

During a press conference, Jourova stated her desire to prevent European elections from becoming “a contest of underhanded tactics,” and added: “We have seen enough with events like the Cambridge Analytica scandal and the Brexit referendum.”

However, the Commission is not currently proposing a prohibition on political microtargeting.

Its immediate focus will be on restricting its application in political contexts, such as limiting the criteria used for targeting. (As Jourova explained, “Promoting political viewpoints differs from promoting products.”)

The Commission states it will explore “further restrictions on micro-targeting and psychological profiling within the political sphere”.

“Specific obligations could be appropriately imposed on online intermediaries, advertising service providers, and other relevant parties, based on their size and influence (including requirements for labeling, record-keeping, disclosure, price transparency, and criteria for targeting and amplification),” the plan suggests. “Additional measures could involve specific collaboration with supervisory authorities and the development of co-regulatory codes and professional standards.”

The plan acknowledges that microtargeting and behavioral advertising can hinder accountability for political actors—and that these tools and techniques can be “exploited to spread divisive and polarizing narratives”.

It further notes that the personal data of citizens used to power such manipulative microtargeting may have been “acquired improperly”.

This is a significant recognition of the existing problems within the adtech industry—a concern voiced by European privacy and legal professionals for many years. They have recently cautioned that EU data protection regulations, updated in 2018, are not being adequately enforced in this area.

The UK’s ICO, for instance, is currently facing legal challenges regarding its lack of regulatory action against unlawful adtech practices. (Notably, back in 2018, its commissioner issued a report warning that democracy is being undermined by the improper use of personal data combined with ad-targeting techniques on social media platforms.)

The Commission has taken these concerns into account. However, its strategy for addressing them remains somewhat unclear.

“There is a clear need for greater transparency in political advertising and related commercial activities, and stronger enforcement and adherence to the General Data Protection Regulation (GDPR) rules is essential,” it states—reiterating a finding this summer, in its two-year GDPR review, where it acknowledged that the regulation’s effectiveness has been hampered by inconsistent and insufficient enforcement.

The central message from the Commission is now that ‘GDPR enforcement is crucial for democracy.’

However, enforcement is the responsibility of national data protection authorities. Therefore, unless this enforcement gap is addressed, it is uncertain whether the Commission’s action plan will fully achieve the desired democratic resilience. While media literacy is a valuable objective, it is a gradual process compared to the immediate impact of data-driven adtech tools.

“Regarding the Cambridge Analytica case, I mentioned it because we do not want a situation where political marketing relies on privileged access to or possession of individuals’ private data [without their permission],” Jourova explained during a question-and-answer session with the press, acknowledging the shortcomings of GDPR enforcement.

“[Following the scandal] we believed that the implementation of GDPR would protect us against such practices—that people would need to provide consent and be aware of it—but we now see that relying solely on consent or leaving it to citizens to provide consent may be insufficient.”

Jourova described the Cambridge Analytica scandal as “a pivotal moment for all of us”.

“Enforcement of privacy regulations is not enough—that’s why we are presenting the European Democracy Action Plan with a vision for the coming year that includes rules for political advertising, where we are seriously considering limiting microtargeting as a method used to promote political powers, parties, or individuals,” she added.

The Commission states that its legislative proposal on the transparency of political content will complement broader regulations on online advertising outlined in the Digital Services Act (DSA) package—scheduled to be presented later this month (establishing a range of responsibilities for platforms). Consequently, the complete details of its proposed regulation of online advertising are still forthcoming.

Strengthened Efforts to Combat Disinformation

A key component of the Democracy Action Plan centers on addressing the increasing prevalence of online disinformation.

The coronavirus pandemic has sharpened EU lawmakers’ attention to the issue, underscoring the public health stakes and raising concerns that false information could undermine COVID-19 vaccination initiatives.

Regarding disinformation, the Commission intends to revise its existing, self-regulatory framework for tackling online false information—known as the Code of Practice on disinformation, initially launched in 2018 with limited participation from the tech sector—and will place greater demands on major platforms to detect and prevent coordinated manipulative activities through a planned transition to a co-regulatory system featuring “obligations and accountability.”

The revised disinformation code will operate in conjunction with the DSA, which will establish broad accountability standards for platforms. However, the strengthened code is designed to complement the DSA and/or fill any gaps until the DSA is fully implemented, a process anticipated to take “years” due to standard EU legislative procedures, according to Jourova.

“Our focus will not be on regulating the removal of contested content,” she stressed, outlining the plan to enhance the disinformation code. “We are committed to preserving freedom of speech and will not support any measures that compromise it. However, we cannot allow our societies to be manipulated by organized efforts to sow distrust and undermine democratic principles; it would be naive to stand by. We must respond decisively.”

“The concerning trend of disinformation, as we all recognize, is particularly evident regarding COVID-19 vaccines,” she continued. “Supporting the vaccine strategy requires an effective response to disinformation.”

When questioned about how the Commission will ensure platforms comply with the new code’s requirements, Jourova indicated that the DSA will likely empower Member States to determine which authorities will be responsible for enforcing future platform accountability regulations.

The DSA will prioritize “increased accountability and obligations to implement risk mitigation measures,” she explained, adding that the disinformation code (or a comparable arrangement) will be considered a risk mitigation measure—encouraging platforms and other stakeholders to participate.

“We are already working closely with the major platforms,” she added, responding to a question about whether the Commission had delayed addressing the threat posed by COVID-19 vaccine disinformation. “We will not wait for the upgraded code of practice, as we already have a clear understanding with the platforms that they will continue the actions they began earlier this year.”

Platforms are currently promoting accurate, authoritative health information to counter COVID-19 disinformation, she noted.

“Concerning vaccinations, I have already contacted Google and Facebook to express our desire to expand this effort. We are developing and implementing a communications strategy to promote vaccination as the dependable—and potentially the only dependable—method for overcoming COVID-19,” she stated, adding that this work is “well underway.”

However, Jourova emphasized that the forthcoming upgrade to the code of practice will introduce additional requirements, including those related to algorithmic accountability.

“We need a clearer understanding of how platforms determine who sees what content and the reasoning behind those decisions,” she said. “Furthermore, there must be established guidelines for researchers to access relevant data. We also need measures to reduce the financial incentives for spreading disinformation. Finally, I want to see improved standards for collaboration with fact-checkers, as the current situation is inconsistent and we seek a more systematic approach.”

The code must also incorporate “clearer and more effective” methods for addressing manipulation involving bots and fraudulent accounts, she added.

The new Code of Practice on disinformation is anticipated to be finalized in the new year.

Current signatories include TikTok, Facebook, Google, Twitter and Mozilla.

Tags: EU, political ads, disinformation, transparency, regulation, Europe