EU AI Act: “Unacceptable Risk” AI Systems Banned

February 2, 2025

EU AI Act: First Compliance Deadline Arrives

As of this Sunday, regulators in the European Union can prohibit the use of AI systems they deem to present an “unacceptable risk” or potential for harm.

February 2nd marks the first compliance deadline stipulated by the EU’s AI Act, the bloc’s comprehensive regulatory framework for artificial intelligence. The legislation received final approval from the European Parliament last March after years of development, and it formally entered into force on August 1st; this date is the first in a series of compliance milestones.

Scope of the AI Act

The Act is designed to cover a wide range of use cases in which AI might appear and interact with individuals, from everyday consumer tools to physical environments; the practices prohibited as of this deadline are enumerated in Article 5.

The EU’s regulatory strategy sorts AI applications into four risk tiers:

  • Minimal risk (for example, email spam filters): largely unregulated.
  • Limited risk (for example, customer service chatbots): subject to light-touch oversight.
  • High risk (for example, AI providing healthcare recommendations): subject to stringent regulatory scrutiny.
  • Unacceptable risk: prohibited entirely; these prohibitions are the focus of this month’s requirements.

Unacceptable AI Practices

Certain AI activities are now considered unacceptable and are subject to prohibition. These include:

  • The utilization of AI for social scoring, involving the creation of risk profiles based on individual behavior.
  • AI systems designed to manipulate individuals through subliminal or deceptive techniques.
  • AI that takes advantage of vulnerabilities related to age, disability, or socioeconomic status.
  • The use of AI to predict criminal behavior based on a person’s physical characteristics.
  • AI employing biometric data to infer sensitive personal attributes, such as sexual orientation.
  • The collection of “real time” biometric data in public spaces for law enforcement purposes.
  • AI attempting to deduce individuals’ emotional states in workplace or educational settings.
  • The creation or expansion of facial recognition databases through the scraping of online images or security camera footage.

Penalties for Non-Compliance

Organizations found to be utilizing any of the prohibited AI applications within the EU will be subject to substantial financial penalties, irrespective of their global headquarters location.

Potential fines can reach up to €35 million (approximately $36 million), or 7% of the company’s annual revenue from the preceding fiscal year, whichever amount is higher.
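To make the “whichever is higher” rule concrete, here is a minimal illustrative sketch in Python (the function name and turnover figures are hypothetical; actual fines are set by regulators case by case within this ceiling):

    # Illustrative only: the AI Act caps fines for prohibited practices at the
    # greater of EUR 35 million or 7% of the preceding fiscal year's annual
    # revenue. The revenue figures below are hypothetical.
    def fine_ceiling_eur(annual_revenue_eur: float) -> float:
        FIXED_CAP_EUR = 35_000_000   # EUR 35 million
        REVENUE_SHARE = 0.07         # 7% of prior-year annual revenue
        return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

    print(fine_ceiling_eur(1_000_000_000))  # 70000000.0 -- 7% exceeds the fixed cap
    print(fine_ceiling_eur(100_000_000))    # 35000000 -- the fixed cap dominates

In other words, the €35 million figure acts as a floor on the ceiling: for any company with prior-year revenue above €500 million, the 7% share is the binding cap.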

However, the enforcement of these fines is not immediate, as explained by Rob Sumroy, head of technology at Slaughter and May, in a recent TechCrunch interview.

“While organizations are expected to demonstrate full compliance by February 2nd, the critical deadline for companies to note is August,” Sumroy stated. “By that time, the designated competent authorities will be identified, and the provisions regarding fines and enforcement will become active.”

Initial Commitments

The deadline of February 2nd, while significant, largely represents a procedural step.

In September of the previous year, more than 100 organizations endorsed the EU AI Pact, a non-binding commitment to begin applying the principles of the AI Act ahead of its application deadlines. Participants, including prominent firms like Amazon, Google, and OpenAI, pledged to identify AI systems likely to fall under the high-risk classification as defined by the Act.

Several major technology companies, most notably Meta and Apple, chose not to participate in the Pact. Mistral, a French AI startup and vocal detractor of the AI Act, also declined to sign the agreement.

This decision doesn't necessarily indicate that Apple, Meta, Mistral, or other non-signatories will fail to fulfill their legal requirements, including the prohibition of unacceptably dangerous systems. As Sumroy observes, the specific prohibited applications outlined in the Act are unlikely to be pursued by most companies in any case.

A primary concern for organizations is whether definitive guidance, standards, and codes of conduct will arrive in time, and whether they will give companies clarity on compliance. “However, the working groups are, to date, adhering to their schedules concerning the code of conduct for developers,” Sumroy stated.

Exceptions to the AI Act’s Restrictions

Several of the prohibitions outlined in the AI Act are subject to specific exceptions.

For instance, the legislation allows for the utilization of certain biometric systems by law enforcement in public areas, provided they are employed for a “targeted search,” such as locating a missing child, or for mitigating an “imminent and substantial” risk to human life.

This exemption necessitates approval from the relevant authorities, and the Act emphasizes that law enforcement agencies cannot base decisions with “adverse legal consequences” for individuals solely on the results generated by these systems.

Workplace and Educational Exceptions

Furthermore, the Act includes exemptions for systems designed to infer emotions within workplaces and educational institutions, but only when a “medical or safety” rationale exists, like applications intended for therapeutic purposes.

The European Commission, the EU’s executive body, indicated that supplementary guidance would be issued in “early 2025” after stakeholder consultations in November.

As of this writing, however, that guidance has yet to be published.

Uncertainties and Interactions with Existing Laws

Sumroy highlighted the existing ambiguity regarding the interplay between the AI Act’s prohibitions and existing legal frameworks.

Full clarity may not arrive until later in the year, as enforcement deadlines draw nearer.

“Organizations must recognize that AI regulation does not operate independently,” Sumroy stated. “Existing legal structures, including GDPR, NIS2, and DORA, will interact with the AI Act, potentially creating complexities—specifically concerning overlapping requirements for incident reporting.”

Understanding the integration of these laws will be as vital as comprehending the AI Act itself.
