EU to Regulate Big Tech Algorithms - Transparency Plan

Leading online platforms will face new obligations to reveal how their internal systems operate, as European legislators prepare to unveil proposals next month.
During a recent address, European Commission Executive Vice President Margrethe Vestager indicated that accountability for algorithms will be a central component of the upcoming digital legislation package. This will involve new regulations requiring platforms to clarify the mechanics of their recommendation systems and grant users increased control over them.
“The regulations we are developing will establish a responsibility for all digital services to cooperate with regulatory bodies. Furthermore, the largest platforms will be required to furnish more detailed information regarding their algorithmic processes when requested by regulators,” she stated, adding that platforms will also be obligated to “provide regulators and researchers with access to their data holdings—including advertising archives.”
Although platforms such as Facebook have already established ad archives independently, researchers continue to express concerns about the structure of this information and its accessibility for impartial analysis.
Enhanced user information concerning ad targeting is another planned requirement, alongside expanded reporting obligations for platforms to explain their content moderation choices, as outlined by Vestager—who also previewed the forthcoming Digital Services Act and Digital Markets Act in a separate speech earlier this week.
Lawmakers across the region are addressing anxieties that ‘black box’ algorithms can have detrimental consequences for both individuals and society—stemming from how they process data and organize and prioritize information, with potential risks including discrimination, the amplification of biases, and the targeted abuse of vulnerable populations.
The Commission has announced its intention to implement legally binding transparency regulations to compel technology companies to assume greater responsibility for the content their platforms promote and profit from. The specifics of these requirements and their effective enforcement remain to be determined, but a draft of the plan is anticipated within the next month.
“A primary objective of the Digital Services Act, which we will present in December, is to safeguard our democracy by ensuring platforms are transparent about how these algorithms function—and to hold those platforms accountable for the decisions they make,” Vestager explained during a speech at an event hosted by the non-profit research advocacy organization AlgorithmWatch.
“The proposals we are developing will require platforms to inform users about how their recommendation systems determine which content is displayed—allowing us to better assess the reliability of the worldview they present.”
Under the proposed regulations, the most influential Internet platforms—referred to as ‘gatekeepers’ within the EU—will be required to submit regular reports on “the content moderation tools they employ, and the precision and outcomes of those tools,” according to Vestager.
There will also be specific disclosure requirements for ad targeting that go beyond the limited disclosures currently offered by platforms like Facebook (through its ‘why am I seeing this ad?’ feature).
“More comprehensive information” will be necessary, she said, such as platforms informing users “who placed a particular ad, and the rationale behind its targeting.” The overall goal is to ensure users have “a clearer understanding of who is attempting to influence us—and a greater ability to identify instances of algorithmic discrimination.”
Today, a coalition of 46 civil society organizations, spearheaded by AlgorithmWatch, urged the Commission to ensure the transparency requirements in the upcoming legislation are “meaningful”—calling for the introduction of “comprehensive data access frameworks” that equip watchdogs with the tools needed to hold platforms accountable, and to enable journalists, academics, and civil society to “challenge and scrutinize power.”
The group’s recommendations advocate for legally binding disclosure obligations based on the technical functionalities of dominant platforms; a single EU institution “with a clear legal mandate to enable access to data and to enforce transparency obligations”; and provisions to guarantee data collection adheres to EU data protection regulations.
Another approach to address the imbalance of power between data-intensive platform companies and the individuals they track, profile, and target could involve allowing users to completely disable algorithmic feeds if they choose—opting out of the potential for data-driven discrimination or manipulation. However, it remains uncertain whether EU lawmakers will pursue this option in the forthcoming legislative proposals.
Vestager only hinted at this possibility, stating that the planned regulations “will also empower users—ensuring that algorithms do not have the final say in what we see, and what we do not see.”
Platforms will also be required to provide users “the ability to influence the choices that recommender systems make on their behalf,” she added.
She further confirmed that there will be more detailed reporting requirements for digital services regarding content moderation and removals—stating they will be required to notify users when content is removed, and provide them with “effective rights to contest that removal.”

While there is broad public support throughout the bloc for updating the rules governing digital giants, there are also strong opinions that regulation should not infringe upon online freedom of expression—such as by prompting platforms to reduce their regulatory risk by implementing upload filters or removing contentious content without justification.
The proposals will require the approval of EU Member States, through the European Council, and elected representatives in the European Parliament.
The latter has already voted in favor of stricter rules on ad targeting. MEPs also urged the Commission to reject the use of upload filters or any form of ex-ante content control for harmful or illegal content, asserting that the final determination of content legality should be made by an independent judiciary.
Concurrently, the Commission is developing regulations specifically for applications utilizing artificial intelligence—but that legislative package is not scheduled for release until next year.
Vestager confirmed that it will be introduced in early 2021 with the objective of creating “an AI ecosystem of trust.”