
EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

Natasha Lomas
Senior Reporter, TechCrunch
April 14, 2021

EU Considers Significant Fines for AI Regulation Violations

European Union lawmakers drafting rules for the application of artificial intelligence are considering fines of up to 4% of a company’s global annual turnover, or a flat €20 million if that fixed sum is higher.
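
To make the headline figure concrete, here is a minimal illustrative sketch (not drawn from the draft itself; the function name and example turnovers are hypothetical) of how a “whichever is higher” cap of 4% of turnover or €20 million would work out in practice:

```python
# Illustrative only: the draft's headline penalty is capped at the higher of
# 4% of a company's global annual turnover or a flat EUR 20 million.

def max_ai_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum applicable fine under the 'whichever is higher' rule."""
    PERCENTAGE_CAP = 0.04        # 4% of worldwide annual turnover
    FLAT_CAP_EUR = 20_000_000.0  # flat EUR 20 million alternative

    return max(PERCENTAGE_CAP * global_annual_turnover_eur, FLAT_CAP_EUR)


# A firm with EUR 1bn turnover faces a cap of EUR 40m (4% exceeds EUR 20m);
# a firm with EUR 100m turnover still faces the flat EUR 20m figure.
print(max_ai_fine_eur(1_000_000_000))  # 40000000.0
print(max_ai_fine_eur(100_000_000))    # 20000000.0
```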

These fines are proposed for a defined set of prohibited AI use cases, set out in a leaked draft of the forthcoming AI regulation. The draft was first reported by Politico, and the regulation is due to be officially unveiled next week.

Background on the AI Regulation Plan

The initiative to regulate AI has been under consideration for some time. The European Commission initially presented a white paper in February 2020, outlining plans for the regulation of what were termed “high-risk” applications of artificial intelligence.

Early discussions among EU lawmakers centered on a sector-specific approach. Certain industries, such as energy and recruitment, were initially identified as potential areas of heightened risk. However, this strategy appears to have been revised.

Shift in Focus: From Sectoral to Application-Based Risk

The leaked draft indicates a move away from a sectoral focus. The regulation, as currently proposed, does not restrict the assessment of AI risk to specific industries or sectors.

Instead, the emphasis is placed on establishing compliance requirements for high-risk AI applications, regardless of where they are deployed. It is important to note that applications related to weapons or military purposes are explicitly excluded, as these fall outside the scope of EU treaties.

The precise definition of ‘high risk’ remains somewhat unclear based on the current draft.

Promoting Trustworthy and Human-Centric AI

The Commission’s primary objective is to enhance public confidence in AI. This will be achieved through a system of compliance checks and balances, grounded in “EU values,” to encourage the adoption of “trustworthy” and “human-centric” AI systems.

Even developers of AI applications not classified as ‘high risk’ are encouraged to adhere to established codes of conduct. This is intended “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems,” according to the Commission.

Supporting AI Development within the EU

A portion of the regulation addresses measures designed to bolster AI development within the European Union. Member States are urged to create regulatory sandboxing schemes.

These schemes will prioritize support for startups and SMEs, enabling them to develop and test AI systems before their market release.

Regulatory Sandboxes and Authority Powers

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox,” the draft states.

This empowerment will occur while fully maintaining the authorities’ existing supervisory and corrective powers.

Defining High-Risk Artificial Intelligence

Those planning to deploy artificial intelligence will be required, under proposed regulations, to assess whether a specific application qualifies as ‘high risk’. This determination will dictate whether a mandatory, pre-market compliance assessment is necessary.

The categorization of an AI system as high-risk is determined by its intended purpose – specifically, the use for which it’s designed, including the context and conditions of operation. This assessment involves a two-step process: identifying potential harms and then evaluating the severity and likelihood of those harms, as detailed in the draft legislation.

It’s important to note that classifying an AI system as high-risk under this regulation doesn’t automatically equate to a ‘high-risk’ designation under existing sectoral legislation. The criteria are distinct.

The draft outlines several potential “harms” linked to high-risk AI systems. These include physical injury or loss of life, property damage, widespread negative societal impacts, significant disruptions to essential services, and limitations to opportunities in finance, education, or employment.

Adverse impacts on access to public services and fundamental rights are also considered.

Examples of High-Risk AI Applications

The document provides several examples of applications considered high-risk. These encompass recruitment tools, systems governing access to educational institutions, emergency dispatch systems, credit scoring models, and those used to allocate taxpayer-funded benefits.

Furthermore, systems involved in crime prevention, detection, and prosecution, as well as those assisting judicial decision-making, are also included.

Systems meeting these compliance standards – including establishing a robust risk management system and conducting post-market surveillance via a quality management system – will not be prohibited from the EU market.

Additional requirements focus on security and ensuring consistent accuracy in performance. Providers are obligated to report any serious incidents or malfunctions that violate regulatory obligations to the relevant oversight authority within 15 days of discovery.

“High-risk AI systems can be launched on the EU market, provided all mandatory requirements are fulfilled,” the text clarifies.

Compliance and Risk Management

Compliance with mandatory requirements for high-risk AI systems must align with the system’s intended purpose and the provider’s established risk management system.

Risk control measures identified by the provider should carefully consider the effects and potential interactions between mandatory requirements. They must also reflect the current state of the art, including relevant harmonized standards and common specifications.

Restricted Practices and the Use of Biometrics in AI

Article 4 of the proposed legislation outlines certain AI applications deemed prohibited, as indicated in this draft – encompassing mass surveillance systems utilized for commercial purposes and broad-based social scoring mechanisms that carry the potential for discriminatory outcomes.

AI systems engineered to influence human conduct, choices, or viewpoints in a harmful manner—such as those employing deceptive UI designs known as dark patterns—are also specified as prohibited under Article 4. Systems leveraging personal data to predict and exploit the vulnerabilities of individuals or specific demographics fall into this category as well.

A superficial reading might suggest the regulation intends to immediately outlaw practices like behavioral advertising, which relies on user tracking—essentially challenging the core business models of companies such as Google and Facebook. However, this presupposes that these adtech companies will acknowledge the detrimental effects of their tools on users.

Conversely, their strategy for navigating regulation rests on asserting the opposite, as evidenced by Facebook’s insistence that its advertisements are “relevant.” Consequently, the current wording of the text appears likely to trigger further protracted legal disputes as efforts are made to enforce EU law against the self-serving interpretations of technology corporations.

The rationale behind prohibiting these practices is summarized in a preceding recital within the draft, stating that artificial intelligence has the capacity to facilitate manipulative, addictive, and controlling practices, alongside indiscriminate surveillance, all of which are particularly damaging and contradict the fundamental values of the Union—namely, respect for human dignity, freedom, democracy, the rule of law, and human rights.

Notably, the Commission has refrained from proposing a ban on facial recognition technology in public spaces, an idea it had floated in a draft leaked early last year but had already backed away from by the time last year’s White Paper was published.

The leaked draft specifies that “remote biometric identification” in public areas will be subject to “stricter conformity assessment procedures involving a notified body”—meaning an “authorisation process addressing the specific risks associated with the technology.” This includes a mandatory data protection impact assessment, a more rigorous requirement than that applied to most other high-risk AI applications, which can achieve compliance through self-assessment.

The draft further stipulates that “the authorising authority should evaluate the probability and severity of harm resulting from inaccuracies in a system used for a particular purpose, especially concerning age, ethnicity, gender, or disabilities.” It also emphasizes the need to consider the societal impact, particularly regarding democratic and civic engagement, as well as the methodology, necessity, and proportionality of including individuals in the reference database.

AI systems “that could primarily result in negative consequences for personal safety” are also mandated to meet this higher standard of regulatory scrutiny as part of the compliance process.

The planned system of conformity assessments for all high-risk AIs is continuous, with the draft noting that “an AI system should undergo a new conformity assessment whenever a change occurs that may affect its compliance with this Regulation or when the system’s intended purpose is altered.”

“For AI systems that continue to ‘learn’ after being deployed or put into service—meaning they automatically adjust their functions—changes to the algorithm and performance that were not predetermined and assessed during the initial conformity assessment will necessitate a new conformity assessment of the AI system,” it adds.

Businesses that demonstrate compliance will be eligible to display a ‘CE’ mark, fostering user trust and enabling seamless access throughout the bloc’s single market.

“High-risk AI systems should bear the CE marking to demonstrate their conformity with this Regulation, allowing them to circulate freely within the Union,” the text states, further clarifying that “Member States should not impose obstacles to the marketing or deployment of AI systems that adhere to the requirements outlined in this Regulation.”

Addressing the Challenges of Bots and Deepfakes

The regulation sets out both to prohibit certain practices and to establish a unified set of rules across the European Union for safely bringing AI systems deemed ‘high risk’ to market.

These regulations anticipate that providers will primarily conduct self-assessments and adhere to compliance requirements.

These obligations encompass aspects like data-set quality used for model training, meticulous record-keeping, human oversight protocols, transparency measures, and ensuring accuracy.

Such adherence is expected both before a product’s launch and through continuous post-market monitoring.

Beyond these measures, the proposed regulation aims to mitigate the potential for AI to be exploited for deceptive purposes.

Transparency Requirements for AI Interactions

The regulation proposes “harmonised transparency rules” for AI systems designed to engage with individuals, such as voice AIs and chatbots.

These rules also extend to AI systems utilized for creating or altering image, audio, or video content – commonly known as deepfakes.

The text emphasizes that certain AI systems, even if not classified as high-risk, can present risks of impersonation or deception.

Consequently, specific transparency obligations should apply to these systems, independent of their risk categorization.

Specifically, individuals should be informed when they are interacting with an AI system, unless the interaction’s context makes this readily apparent.

Disclosure for Generated or Manipulated Content

Users employing AI to generate or manipulate content—images, audio, or video—that closely resembles real people, locations, or events, must disclose its artificial origin.

This disclosure is required if the content could reasonably be mistaken for authentic material.

The artificial intelligence output should be clearly labelled as such, indicating it has been artificially created or manipulated.

However, exceptions exist for situations where such content is crucial for public safety or for exercising legitimate rights, like satire, parody, or artistic expression.

Appropriate safeguards must still be in place to protect the rights and freedoms of others in these cases.

Addressing the Question of Enforcement

The European Commission’s forthcoming AI regulation, while anticipated, still presents uncertainties regarding its practical implementation. A key concern revolves around effectively overseeing compliance with the new rules, particularly given existing challenges in enforcing the EU’s data protection framework – the General Data Protection Regulation (GDPR) – which has been in effect since 2018.

The proposed legislation places the onus of responsibility on providers of high-risk AI systems to ensure their products meet all stipulated requirements. This includes registration within an EU database managed by the Commission.

National Competent Authorities and Enforcement

However, the actual enforcement of these regulations will be delegated to individual Member States. Each state will be tasked with appointing national competent authorities to supervise the application of the oversight regime.

Experience with the GDPR demonstrates potential weaknesses in this approach. The Commission has acknowledged inconsistencies in the vigor and application of GDPR enforcement across the EU. Therefore, a critical question arises: will the new AI rules suffer a similar fate, allowing for 'forum-shopping' by those seeking to avoid strict oversight?

The draft regulation stipulates that Member States “should take all necessary measures to ensure…implementation,” including establishing penalties that are “effective, proportionate and dissuasive.” Specific infringement penalties are also to be considered.

While the Commission reserves the right to intervene if Member State enforcement proves inadequate, there are currently no plans for a fundamentally different enforcement strategy. This suggests that familiar obstacles may re-emerge.

The Potential for Union-Level Intervention

The regulation includes a provision for potential Union-level action if Member States fail to adequately enforce the rules. This is based on the principle of subsidiarity, recognizing that a unified approach may be necessary given the scale and impact of AI regulation.

“Since the objective…cannot be sufficiently achieved by the Member States…the Union may adopt measures,” the draft states, outlining the Commission’s backstop for future enforcement shortcomings.

Establishing the European Artificial Intelligence Board

To facilitate oversight, the plan includes the creation of a new body – the European Artificial Intelligence Board. This entity will function similarly to the GDPR’s European Data Protection Board, providing recommendations and opinions to EU lawmakers.

Its role will encompass guidance on prohibited AI practices and the categorization of high-risk systems, supporting consistent application of the regulation across the Union.


Natasha Lomas

Natasha's Extensive Journalism Career

Natasha served as a senior reporter at TechCrunch from September 2012 to April 2025, reporting from Europe.

Prior to her time at TechCrunch, she gained experience reviewing smartphones for CNET UK. This followed a five-year period dedicated to business technology coverage.

Early Career at silicon.com

Natasha’s earlier career included a significant role at silicon.com, which has since been integrated into TechRepublic. During this time, her focus encompassed several key areas.

  • Mobile and wireless technologies
  • Telecoms & networking infrastructure
  • IT skills and training

She consistently delivered insightful reporting on these evolving technological landscapes.

Freelance Contributions

Beyond her staff positions, Natasha broadened her journalistic portfolio through freelance work. She contributed articles to prominent organizations such as The Guardian and the BBC.

Educational Background

Natasha’s academic credentials include a First Class degree in English from Cambridge University. She furthered her education with an MA in journalism from Goldsmiths College, University of London.

These qualifications provided a strong foundation for her successful career in technology journalism.
