
July 18, 2025

Meta Declines to Endorse EU AI Code of Practice

Meta has opted not to sign the European Union’s code of practice for its AI Act. The decision comes just weeks before the EU’s rules governing providers of general-purpose AI models are due to become enforceable.

Concerns Regarding the EU’s Approach

Joel Kaplan, Meta’s chief global affairs officer, laid out the company’s position in a LinkedIn post, stating that “Europe is heading down the wrong path on AI.” After reviewing the European Commission’s Code of Practice for general-purpose AI (GPAI) models, he said, Meta concluded it would not sign.

Kaplan argued that the Code introduces legal uncertainties for model developers and that some of its measures go well beyond the scope of the AI Act itself.

Details of the EU Code of Practice

The EU’s code of practice, a voluntary framework released earlier this month, is designed to assist companies in establishing processes and systems for adherence to the EU’s AI regulations.

Key requirements within the code include:

  • Providing and regularly updating documentation about their AI tools and services.
  • Not training AI models on pirated content.
  • Honoring requests from content owners to exclude their works from training datasets.

Criticism of the AI Act’s Implementation

Kaplan characterized the EU’s implementation of the legislation as an “overreach.” He contends that the law will hinder both the development and deployment of cutting-edge AI models within Europe.

He also argues that it will hold back European companies seeking to build businesses on top of these advanced AI technologies.

Overview of the AI Act

The AI Act is a risk-based regulation focused on the application of artificial intelligence. It outright bans certain “unacceptable risk” applications, such as cognitive behavioral manipulation and social scoring systems.

The Act also identifies specific “high-risk” applications, including biometrics, facial recognition, and uses within sectors like education and employment. Developers are required to register AI systems and fulfill obligations related to risk and quality management.

Industry Opposition and EU Response

Numerous tech companies around the world, including AI leaders such as Alphabet, Microsoft, and Mistral AI, have actively opposed the rules and have urged the European Commission to postpone their implementation.

However, the Commission has remained resolute, affirming its commitment to the established timeline.

Upcoming Deadlines and Guidelines

On Friday, the EU also released guidelines for AI model providers ahead of the rules taking effect on August 2. These rules specifically target providers of “general-purpose AI models with systemic risk,” such as OpenAI, Anthropic, Google, and Meta.

Companies already offering such models before August 2 will be required to comply with the legislation by August 2, 2027.
