EU AI Act: Latest Draft & Key Changes

EU AI Act: Latest Code of Practice Draft Released
A third draft of the Code of Practice, designed to help providers of general purpose AI (GPAI) models comply with the provisions of the EU AI Act, was released on Tuesday. The guidance is due to be finalized in May.
Work on the Code began last year, and this draft is expected to be the last before the guidance is formally adopted.
Enhanced Accessibility and Feedback
To improve access to the Code, a dedicated website has been launched. This platform serves as a central hub for information and resources related to the guidelines.
Stakeholders are invited to submit written feedback on the current draft. The deadline for providing input is March 30, 2025.
Obligations for Powerful AI Models
The EU’s AI Act employs a risk-based framework, imposing specific obligations on developers of the most advanced AI models. These obligations encompass crucial areas like transparency, copyright protection, and risk mitigation strategies.
The Code of Practice is intended to clarify how GPAI model creators can fulfill their legal responsibilities and thereby avoid potential penalties for failing to comply with the Act.
Potential Penalties for Non-Compliance
Violations of the GPAI requirements outlined in the AI Act could result in substantial financial penalties. These penalties may amount to as much as 3% of a company’s total global annual revenue.
Therefore, understanding and implementing the guidance provided within the Code of Practice is critical for all GPAI model providers operating within the European Union.
Revised Code Structure
The newest version of the Code is described as having a more streamlined structure, with refined commitments and measures, building on feedback on the second draft published in December.
Ongoing feedback, collaborative discussions within working groups, and workshops will contribute to the development of the third draft into finalized guidance. Experts aim to achieve enhanced “clarity and coherence” in the ultimately adopted version of the Code.
Key Sections of the Draft
The draft is divided into several sections covering commitments for GPAI providers, along with detailed guidance on transparency and copyright measures.
A dedicated section addresses safety and security responsibilities, specifically applicable to the most potent models – those identified as posing systemic risk (GPAISR).
Transparency Measures
Regarding transparency, the guidance incorporates an example of a model documentation form. GPAIs may be required to complete this form to ensure downstream users of their technology have access to crucial information for their own compliance efforts.
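To make the idea concrete, the sketch below shows the kind of fields such a documentation form might capture for downstream deployers. The field names and example values are illustrative assumptions, not the actual contents of the draft's model documentation form.

    # Hypothetical sketch of the information a model documentation form might
    # capture for downstream deployers. Field names are illustrative
    # assumptions, not the fields of the draft's actual form.
    from dataclasses import dataclass, field

    @dataclass
    class ModelDocumentation:
        model_name: str
        provider: str
        intended_uses: list[str]
        training_data_summary: str          # high-level description, not the raw data
        compute_used_flops: float           # relevant to the systemic-risk threshold
        known_limitations: list[str] = field(default_factory=list)
        contact_point: str = ""             # e.g. for deployer or rightsholder queries

    doc = ModelDocumentation(
        model_name="example-gpai-model",
        provider="Example AI Ltd",
        intended_uses=["text generation", "summarisation"],
        training_data_summary="Publicly available web text plus licensed corpora",
        compute_used_flops=1.2e25,
        known_limitations=["may produce inaccurate outputs"],
        contact_point="compliance@example.com",
    )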
Copyright Considerations
The copyright section is likely to remain a particularly sensitive area for major AI companies.
The current draft frequently employs terms such as “best efforts,” “reasonable measures,” and “appropriate measures” when discussing adherence to commitments, such as respecting rights reservations when gathering web data for model training and mitigating the risk of models producing copyright-infringing outputs.
The prevalence of such qualifying language suggests that large-scale data-mining AI firms may perceive considerable flexibility in continuing to acquire protected data for model training, potentially seeking forgiveness later. However, it remains uncertain whether this language will be strengthened in the final Code draft.
Changes to Rightsholder Communication
A previous iteration of the Code stipulated that GPAIs should establish a single point of contact for complaint handling, facilitating direct and swift communication with rightsholders. This provision appears to have been modified.
The current text simply states: “Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.”
Responding to Copyright Complaints
The current draft also indicates that GPAIs may be able to dismiss copyright complaints from rightsholders if they are deemed “manifestly unfounded or excessive,” particularly due to their repetitive nature.
This suggests that attempts by creators to leverage AI tools for copyright detection and automated complaint filing against large AI companies could be disregarded.
Safety and Security Refinements
The EU AI Act’s systemic risk requirements currently apply only to the most powerful models – those trained using a total computing power exceeding 10^25 FLOPs. This latest draft features some previously recommended safety measures that have been further refined in response to feedback.
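As a rough illustration of what that threshold means in practice, the sketch below estimates training compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token. That approximation, and the example model size, are assumptions made for illustration; neither is defined by the Act or the Code.

    # Illustrative sketch: estimate whether a training run crosses the AI Act's
    # 10^25 FLOP systemic-risk threshold. The "6 * parameters * tokens" rule of
    # thumb for dense transformer training compute is an assumption for
    # illustration, not part of the Act or the Code of Practice.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
        return 6 * parameters * training_tokens

    def may_pose_systemic_risk(parameters: float, training_tokens: float) -> bool:
        """True if estimated compute meets or exceeds the 10^25 FLOP threshold."""
        return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

    # Example: a hypothetical 100B-parameter model trained on 20T tokens
    # -> 6 * 1e11 * 2e13 = 1.2e25 FLOPs, which is above the threshold.
    print(may_pose_systemic_risk(1e11, 2e13))  # True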
US Influence on European AI Regulation
The EU’s press release accompanying the latest draft makes no mention of the pointed criticism of European lawmaking in general, and of its AI rules in particular, coming from the U.S. administration under President Donald Trump.
At last month’s Paris AI Action Summit, U.S. Vice President JD Vance dismissed the need to regulate AI safety, saying the Trump administration would instead prioritize seizing the “AI opportunity,” and he cautioned Europe that excessive regulation could stifle innovation.
Since then, the bloc has moved to shelve one AI safety initiative, the AI Liability Directive, and EU legislators are preparing an “omnibus” package of reforms intended to streamline existing regulations and reduce administrative burdens for businesses, with a focus on areas like sustainability reporting.
With implementation of the AI Act still under way, however, there is clearly pressure to water down its requirements.
At the Mobile World Congress in Barcelona, Mistral, the French GPAI model developer that was a vocal opponent of the EU AI Act during the 2023 negotiations, voiced concerns. Founder Arthur Mensch said the company is struggling to find technological solutions that satisfy certain regulatory requirements.
He further indicated that Mistral is collaborating with regulators to resolve these issues.
While the GPAI Code of Practice is being drawn up by independent experts, the European Commission, via the AI Office responsible for enforcement, is concurrently producing supplementary “clarifying” guidance that will define GPAIs and their associated responsibilities.
So expect further guidance from the AI Office, promised “in due time,” as it could offer a route for hesitant lawmakers to respond to U.S. lobbying for AI deregulation, as well as clarifying how far the rules reach.