EU parliament backs tighter rules on behavioural ads

The European Parliament has backed tighter oversight of behavioral advertising, also known as microtargeting, and a shift towards less intrusive, contextually relevant forms of advertising, asking Commission lawmakers to also assess further regulatory options, including a phase-out leading to a full ban.
Members of the European Parliament are also advocating for internet users to have the ability to completely disable algorithmic content selection.
This legislative proposal, originating from the Legal Affairs committee, positions the parliament in potential conflict with the core revenue strategies of major technology companies like Facebook and Google.
Parliament members also endorsed a call for the Commission to examine the feasibility of a European entity tasked with monitoring compliance with the revised digital rules and imposing penalties, signaling support for a single, EU-wide internet regulator to hold platforms accountable.
The votes cast by the EU’s elected officials are advisory in nature, but they send a clear message to Commission lawmakers currently drawing up an update to existing e-commerce regulations via the forthcoming Digital Services Act (DSA) package, which is due to be presented next month.
The DSA aims to revise the regulatory framework for digital services within the region, addressing contentious topics such as responsibility for content created by users and the spread of false information online. While the Commission holds the exclusive authority to propose legislation, the DSA requires the approval of both the EU parliament and the Council to become law, meaning the executive branch must consider the opinions of MEPs.
The Commission also plans to unveil a second set of regulations designed to govern ‘gatekeeper’ platforms through the implementation of proactive rules—known as the Digital Markets Act.
A representative from the Commission affirmed its intention to present both packages before the year’s end, stating: “These proposals will foster a more secure digital environment for all users, safeguarding their fundamental rights while also establishing a fair competitive landscape that enables innovative digital businesses to flourish within the Single Market and compete on a global scale.”
Battle over adtech
The extensive monitoring of Internet users to enable targeted advertising – an area largely controlled by Google and Facebook – is poised to become a significant point of contention as European Commission legislators develop the Digital Services Act (DSA) package.
Last month, Facebook’s Vice President of Policy, Nick Clegg, a former Member of the European Parliament, encouraged regional lawmakers to view favorably a business approach he described as “personalized advertising.” He argued that targeting ads based on user behavior enables smaller companies to compete more effectively with larger, better-funded businesses.
Nevertheless, the legal validity of this approach is currently being challenged on several fronts within the European Union.
Numerous complaints have been submitted to EU data protection authorities regarding the widespread use of Internet users’ data by the adtech sector since the General Data Protection Regulation (GDPR) became enforceable – with these complaints questioning the legitimacy of the data processing and the validity of the consent obtained.
Only recently, a preliminary assessment by Belgium’s data protection authority determined that a primary instrument used for obtaining Internet users’ consent for ad tracking, operated by IAB Europe, does not fulfill the necessary GDPR requirements.
The utilization of Internet users’ personal data within the rapid data exchange inherent in the real-time bidding (RTB) process, central to programmatic advertising, is also under investigation by Ireland’s Data Protection Commission (DPC) following a series of complaints. The UK’s Information Commissioner’s Office (ICO) has also cautioned for over a year about systemic issues with RTB.
Furthermore, some of the longest-standing unresolved GDPR complaints concern alleged ‘forced consent’ practices by Facebook – given GDPR’s stipulation that consent must be freely given. However, Facebook does not provide an option to opt out of behavioral targeting; users are presented with the choice of using the service or foregoing it altogether.
Google has similarly been the subject of complaints regarding this matter. Last year, France’s CNIL imposed a $57M fine on Google for failing to provide Android users with sufficiently clear information about how their data was being processed. However, the fundamental question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC, nearly 2.5 years after the initial GDPR complaint was lodged – meaning a decision has yet to be issued.
Moreover, Facebook’s processing of EU users’ personal data in the United States faces substantial legal uncertainty due to the conflict between EU privacy rights and US surveillance legislation.
A significant ruling (known as Schrems II) by Europe’s highest court this summer clarified that EU data protection agencies are obligated to intervene and halt data transfers to countries outside the EU when there is a risk that the information will not be adequately protected. This resulted in Ireland’s DPC issuing a preliminary order for Facebook to suspend EU data transfers.
Facebook has appealed this order through the Irish courts while seeking a judicial review of the regulator’s procedures – but the underlying legal ambiguity persists. (This is compounded by the fact that the complainant, concerned about the continued data flow, has also been granted a judicial review of the DPC’s handling of the original complaint.)
There has also been a rise in EU-based collective legal actions focused on privacy rights, as the GDPR provides a framework that litigation funders believe can be profitable.
This extensive legal activity concerning the privacy and data rights of EU citizens places pressure on Commission lawmakers to avoid weakening standards as they finalize the DSA package – with the parliament now issuing its own warning, advocating for stricter limitations on intrusive adtech.
This is not the first such call from MEPs. Earlier this summer, the parliament urged the Commission to “prohibit platforms from displaying micro-targeted advertisements and to enhance transparency for users.” While MEPs have since stepped back from pushing for an immediate, outright ban, yesterday’s votes followed more detailed deliberations, as parliamentarians sought to weigh in on the final shape of the DSA package.
Prior to the committee votes, the online ad standards organization, IAB Europe, also attempted to exert influence – releasing a statement urging EU lawmakers not to increase the regulatory burden on online content and services.
“A simplistic and sweeping condemnation of ‘tracking’ overlooks the fact that local, generalist news outlets, whose investigative reporting holds power accountable in a democratic society, cannot be funded solely through contextual advertising, as these publishers lack the resources to invest in lifestyle and other features that facilitate contextual targeting,” it stated.
“Instead of introducing redundant or conflicting provisions to existing regulations, IAB Europe encourages EU policymakers and regulators to collaborate with the industry and support established legal compliance standards, such as the IAB Europe Transparency & Consent Framework [TCF], which can even assist regulators with enforcement. The DSA should instead address clear problems requiring attention in the online environment,” it added in the statement last month.
However, as reported last week, an investigation by the Belgian DPA’s inspectorate service found that IAB Europe’s TCF does not comply with existing EU standards – suggesting the tool falls short of providing ‘model’ GDPR compliance. (Although a final decision by the DPA is still pending.)
The EU parliament’s Civil Liberties committee also presented a non-binding resolution yesterday, concentrating on fundamental rights – including support for privacy and data protection – which received the backing of MEPs.
The resolution asserted that microtargeting based on individual vulnerabilities is problematic, and also expressed concerns about the technology’s role in the dissemination of hate speech and misinformation.
The committee also secured support for a call for increased transparency regarding the monetization practices of online platforms.
‘Know your business customer’
During a series of votes yesterday, Members of the European Parliament also expressed support for establishing a mandatory ‘notice-and-action’ system. This would allow internet users to report potentially unlawful content or activities to online intermediaries, with the option of seeking resolution through a national dispute resolution process.
Legislators did not back the use of upload filters or any form of proactive content monitoring for harmful or illegal content. They emphasized that the final decision on whether content is legal should rest with an independent judiciary, rather than with private companies.
Furthermore, they approved addressing detrimental content, including hate speech and false information, through increased transparency requirements for online platforms and by empowering citizens with improved media and digital skills to better assess such content.
A proposal from the Parliament’s Internal Market Committee to implement a ‘Know Your Business Customer’ principle—intended to fight the online sale of illicit and dangerous goods—also received approval from MEPs. This includes supporting actions to improve platforms’ and marketplaces’ ability to identify and remove inaccurate statements and address fraudulent sellers.
Parliamentarians also voiced support for establishing specific regulations to prevent, rather than simply address, market imbalances created by dominant online platforms, with the goal of fostering competition by allowing new businesses to enter the market—demonstrating agreement with the Commission’s proposal for proactive regulations targeting ‘gatekeeper’ platforms.
Liability for ‘high risk’ AI
The parliament also backed a legislative initiative calling for rules on AI, asking Commission lawmakers to draw up a new legal framework setting out the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including software, algorithms and data.
The Commission has signaled it is already working on such a framework, having published a white paper earlier this year, with a comprehensive proposal expected in 2021.
Members of the European Parliament endorsed a stipulation that ‘high-risk’ AI systems, particularly those featuring self-learning capabilities, should be engineered to permit human supervision at all times — and advocated for a forward-looking civil liability system that would hold operators of such technologies fully accountable for any ensuing harm.
The parliament concurred that these rules should cover both physical and virtual AI operations that cause injury or damage to life, health, physical integrity or property, or that cause significant non-material harm resulting in “demonstrable financial loss”.
This report was updated with comment from the Commission and additional detail about the Digital Markets Act.