EU AI Rules: Risk-Based Plan to Boost Trust & Adoption

EU Lawmakers Unveil AI Regulation Proposal
Legislators within the European Union have put forth a proposal for regulating applications of artificial intelligence deemed to carry significant risk, operating within the bloc’s unified market.
Prohibitions and Restrictions
The plan incorporates outright prohibitions for a limited number of applications considered excessively dangerous to individual safety or the fundamental rights of EU citizens. These include systems resembling China’s social credit model and AI-driven techniques designed to manipulate behavior, potentially causing psychological or physical harm.
Restrictions are also proposed for law enforcement’s utilization of biometric surveillance in public areas, though these are accompanied by extensive exemptions.
Scope of Regulation
The majority of AI applications will not be regulated under this proposal, let alone banned. However, a specific subset identified as “high risk” will be subject to both pre-market and post-market regulatory requirements.
Furthermore, transparency requirements are stipulated for certain AI use-cases, such as chatbots and deepfakes, with the intention of mitigating potential risks through user awareness of interacting with artificial systems.
Extraterritorial Application
The planned law is designed to apply to any entity offering an AI product or service within the EU, irrespective of whether they are based within the Union. This mirrors the extraterritorial scope of the EU’s data protection regulations.
Fostering Public Trust and Innovation
The primary objective for EU lawmakers is to cultivate public confidence in the implementation of AI, thereby encouraging wider adoption of the technology. Senior Commission officials emphasize a desire to establish an “excellence ecosystem” aligned with European values.
Margrethe Vestager, Commission EVP, stated the aim is to position Europe as a global leader in the development of secure, trustworthy, and human-centered Artificial Intelligence and its application.
She further explained that the regulation addresses human and societal risks while simultaneously outlining steps for Member States to boost investment and innovation, ensuring excellence and increased AI uptake across Europe.
Defining “High Risk” AI
The proposal introduces mandatory requirements for applications categorized as “high risk” – those presenting a clear safety risk or potentially infringing upon EU fundamental rights, such as the right to non-discrimination.
Annex 3 of the regulation details examples of high-risk AI use-cases, with the Commission retaining the authority to expand this list as AI technology evolves and new risks emerge.
Examples of High-Risk Applications
Currently cited high-risk examples include:
- Biometric identification and categorization of individuals
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to essential private and public services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
Applications of AI within the military domain are specifically excluded from the regulation’s scope, as it focuses on the bloc’s internal market.
Obligations for High-Risk AI Developers
Developers of high-risk applications will be required to meet a series of pre-market obligations, including ensuring the quality of data-sets used for AI training and implementing human oversight throughout the system’s design and use.
Ongoing post-market surveillance will also be required.
Additional requirements include maintaining records for compliance checks and providing relevant information to users. The robustness, accuracy, and security of the AI system will also be subject to regulation.
Regulation of Low-Risk AI
Commission officials anticipate that the majority of AI applications will fall outside the highly regulated category. Developers of these ‘low risk’ systems will be encouraged to adopt non-legally binding codes of conduct.
Penalties for Non-Compliance
Infringement of the rules prohibiting specific AI use-cases can result in penalties of up to 6% of global annual turnover or €30M, whichever is greater. Violations related to high-risk applications can incur penalties of up to 4% (or €20M).
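The “whichever is greater” rule can be sketched as a simple calculation. This is an illustrative sketch only; the function name and structure are assumptions, not taken from the regulation text:

```python
def max_penalty(turnover_eur: float, prohibited_use: bool) -> float:
    """Illustrative upper bound on fines under the proposal:
    the greater of a share of global annual turnover or a fixed sum."""
    if prohibited_use:
        # Breaches of the outright prohibitions: up to 6% or EUR 30M
        return max(0.06 * turnover_eur, 30_000_000)
    # Violations tied to high-risk applications: up to 4% or EUR 20M
    return max(0.04 * turnover_eur, 20_000_000)

# A firm with EUR 1B global turnover breaching a prohibition:
print(max_penalty(1_000_000_000, True))  # 60000000.0
```

For smaller firms the fixed amounts dominate: at €100M turnover, 6% is only €6M, so the €30M floor applies instead.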
Enforcement Mechanisms
Enforcement will be distributed across multiple agencies within each EU Member State, leveraging existing bodies such as product safety authorities and data protection agencies.
Concerns have been raised regarding the adequate resourcing of national bodies and the potential for enforcement bottlenecks, mirroring issues experienced with the EU’s General Data Protection Regulation.
However, the Commission has incorporated provisions allowing it to investigate potential non-compliance by notified bodies and issue reasoned decisions if Member State agencies fail to meet their obligations.
Databases and Boards
An EU-wide database will be established to register high-risk systems implemented within the bloc, managed by the Commission.
The European Artificial Intelligence Board (EAIB) will be created to ensure consistent application of the regulation, mirroring the role of the European Data Protection Board for the GDPR.
Supporting AI Development
The plan includes measures to coordinate EU Member State support for AI development, building upon the 2018 AI for Europe strategy and a 2021 update to the Coordinated Plan. These measures include establishing regulatory sandboxes and co-funding Testing and Experimentation Facilities.
A network of European Digital Innovation Hubs will serve as ‘one-stop shops’ to assist SMEs and public administrations in enhancing their competitiveness in the AI sector, with targeted EU funding available to support homegrown AI.
Investment in AI
Internal market commissioner Thierry Breton highlighted the importance of investment, stating that €1 billion per year will be allocated through the Digital Europe and Horizon Europe programs. The goal is to generate a collective EU-wide investment of €20 billion per year over the next decade.
An additional €140 billion is earmarked for digital investments under the Next Generation EU recovery fund, with a portion dedicated to AI.
A Key Priority for the EU
Shaping rules for AI has been a central priority for European Commission president Ursula von der Leyen since assuming office in late 2019. The current proposal builds upon a 2018 AI strategy and a white paper published last year.
Breton emphasized that providing clear guidance for businesses will foster legal certainty and give Europe a competitive advantage.
He stated that trust is vital for the development of artificial intelligence, emphasizing the need for applications to be trustworthy, safe, and non-discriminatory. He also highlighted the importance of understanding how these applications function.
“We will be the first continent to provide guidelines – indicating what is acceptable, what requires caution, and what is prohibited. Therefore, if you want to utilize artificial intelligence applications, come to Europe! You will know what to do, how to do it, and will have partners who understand the landscape. Furthermore, you will be operating within the continent that will generate the largest amount of industrial data over the next ten years.”
Recent Developments
A draft of the proposal leaked last week prompted calls from MEPs to strengthen the plan, including a complete ban on remote biometric surveillance in public places.
While the final proposal treats remote biometric surveillance as a high-risk application with a general prohibition for law enforcement use, exceptions remain subject to legal basis and appropriate oversight.
Criticism Mounts Over Proposed AI Regulations
The European Commission’s recently unveiled proposal for regulating artificial intelligence has drawn substantial criticism. Concerns center on perceived weaknesses in protections against the use of remote biometric surveillance technologies, like facial recognition, by law enforcement agencies.
Fair Trials, a criminal justice NGO, asserts that significant improvements are essential for the regulation to provide genuine safeguards within the criminal justice system. Griff Ferris, the NGO’s legal and policy officer, stated that the EU’s proposals require substantial revisions to prevent the entrenchment of bias in criminal justice outcomes, uphold the presumption of innocence, and guarantee meaningful accountability for AI applications in this field.
Ferris further emphasized a lack of safeguards against discriminatory practices. He also noted that the broad exemption granted for ‘safeguarding public security’ significantly diminishes the limited protections offered concerning criminal justice.
The Civil Liberties Union for Europe (Liberties) also voiced objections, pointing to loopholes that could allow EU Member States to circumvent prohibitions on problematic AI applications.
Orsolya Reich, Liberties’ senior advocacy officer, cautioned against allowing uses of the technology such as algorithms predicting crime or computers evaluating emotional states at border crossings. These practices, she warned, pose serious human rights risks and threaten core EU values.
Patrick Breyer, a German Pirate MEP, argued that the proposal fails to meet the stated goal of respecting ‘European values’. He was among 40 signatories of a letter sent to the Commission last week, expressing concerns that a leaked draft did not adequately protect fundamental rights.
Breyer stated that the EU has an opportunity to align artificial intelligence with ethical principles and democratic values. However, he believes the Commission’s proposal falls short in protecting against dangers like gender injustice and unequal treatment, particularly through systems employing facial recognition or mass surveillance.
He contends that biometric and mass surveillance, along with profiling and behavioral prediction technologies, undermine freedom and threaten open societies. Breyer also expressed concern that the proposal would permit the widespread use of automatic facial recognition in public spaces, despite opposition from a majority of citizens.
European Digital Rights (EDRi) highlighted a “worrying gap” in the proposal concerning discriminatory and surveillance technologies. Sarah Chander, EDRi’s senior policy lead on AI, asserted that the regulation grants companies profiting from AI excessive scope for self-regulation, arguing that people, not companies, should be at the center of this regulation.
Access Now echoed these concerns, stating that the proposed prohibitions are “too limited” and the legal framework fails to prevent the development and deployment of AI applications that undermine social progress and fundamental rights.
However, Access Now did welcome transparency measures, such as the planned publicly accessible database of high-risk systems, and acknowledged that the regulation includes some prohibitions, though it believes they are insufficient.
BEUC, the consumer rights umbrella group, criticized the proposal as weak on consumer protection, focusing on regulating only a limited range of AI uses and issues.
Monique Goyens, BEUC director general, emphasized the need for consumers to trust AI in their daily lives. She argued that people should be able to trust any AI-powered product or service, regardless of its risk level, and that the EU must ensure consumers have enforceable rights and access to redress.
The legislative package also includes new rules on machinery, with adapted safety regulations to account for AI-driven changes. The Commission aims to streamline conformity assessments for businesses integrating AI into machinery.
Dot Europe, representing tech industry giants like Airbnb, Apple, and Google, welcomed the release of the Commission’s AI proposal but had not yet issued detailed remarks at the time of publication, stating they were formulating their position.
Allied For Startups cautioned against the potential for increased regulatory burdens on startups. Benedikt Blomeyer, its EU policy director, warned that the proposal could significantly increase the regulatory burden placed on startups, and emphasized the need for proportionality.
Other tech lobby groups criticized the prospect of bespoke red tape for AI, claiming the regulation could “kneecap the EU’s nascent AI industry”. The Center for Data Innovation argued that the regulation would hinder the development of the EU’s AI industry.
The CCIA trade association also warned against “unnecessary red tape”, adding that regulation alone will not establish the EU as a leader in AI.
The Commission’s proposal initiates a period of debate within the EU’s co-legislative process. The European Parliament and Member States, through the EU Council, must review the draft, potentially leading to significant changes before a final agreement is reached.
Commissioners refrained from providing a timeline for the adoption of legislation, stating only that they hoped other EU institutions would engage promptly. However, it could take several years for the regulation to be ratified and come into effect.
This report was updated with reactions to the Commission proposal, and with additional detail about the proposed enforcement structure (Article 37).