
Google AI Policy Proposal: Weaker Copyright & Export Rules

March 13, 2025

Google’s Response to the National AI Action Plan

In response to the Trump administration’s request for a national “AI Action Plan,” Google has released a policy proposal, mirroring a similar move by OpenAI. The company advocates for relaxed copyright restrictions on AI training.

Furthermore, Google supports “balanced” export controls designed to safeguard national security while simultaneously facilitating U.S. exports and international business ventures.

Advocating for International Economic Policy

Google asserts that the U.S. must actively engage in international economic policy to champion American principles and foster AI innovation globally. The company believes that AI policymaking has historically overemphasized potential risks, often overlooking the detrimental effects of excessive regulation on innovation, national competitiveness, and scientific advancement.

This perspective, however, appears to be shifting with the current Administration’s approach.

Intellectual Property and AI Training

A key recommendation from Google centers on the utilization of intellectual property-protected materials. The company contends that “fair use and text-and-data mining exceptions” are critical for the progression of AI development and related scientific research.

Similar to OpenAI, Google aims to establish the right for itself and its competitors to train AI models on publicly accessible data – including copyrighted content – with minimal limitations.

Google argues these exceptions permit the use of copyrighted, publicly available material for AI training without substantially impacting the rights of copyright holders. They also help avoid protracted and often unpredictable negotiations with data owners during model development or scientific experimentation.

Ongoing Legal Challenges

Google, having reportedly utilized public, copyrighted data to train several models, is currently facing legal challenges from data owners who allege a failure to provide notification and compensation prior to such usage. The resolution of whether fair use doctrine adequately protects AI developers from intellectual property litigation remains pending in U.S. courts.

Concerns Regarding Export Controls

Google also expresses concerns about specific export controls enacted under the Biden administration. The company suggests these controls “may undermine economic competitiveness goals” by placing disproportionate burdens on U.S. cloud service providers.

This viewpoint contrasts with statements from competitors such as Microsoft, which said in January that it was confident it could fully comply with the regulations.

It’s important to note that the export rules include exemptions for trusted businesses seeking substantial quantities of advanced AI chips.

Investment in Research and Development

Google’s proposal calls for “long-term, sustained” investments in foundational domestic R&D, opposing recent federal efforts to reduce spending and eliminate grant awards. The company advocates for government release of datasets useful for commercial AI training.

Additionally, Google urges funding allocation to “early-market R&D” and ensuring broad accessibility of computing resources and models to scientists and institutions.

The Need for Federal AI Legislation

Highlighting the fragmented regulatory landscape created by the patchwork of state AI laws, Google urges the federal government to enact national AI legislation, including a comprehensive privacy and security framework. As of early 2025, the number of pending AI bills in the U.S. has reached 781, according to available tracking data.

Liability and Disclosure Concerns

Google cautions against imposing overly strict obligations on AI systems, such as usage liability requirements. The company argues that developers often have limited visibility or control over how their models are utilized and should not be held responsible for misuse.

This stance reflects Google’s historical opposition to laws like California’s SB 1047, which would have clearly defined the precautions AI developers should take before releasing a model and the circumstances under which they might be held liable for resulting harms.

Google maintains that even when a developer provides a model directly, deployers are better positioned to assess risks, implement risk management strategies, and conduct post-market monitoring.

Transparency and Trade Secrets

Google deems disclosure requirements, similar to those being considered by the EU, as “overly broad.” The company advises the U.S. government to resist transparency rules that could reveal trade secrets, enable product duplication by competitors, or compromise national security by providing adversaries with insights into circumventing protections or “jailbreaking” models.

An increasing number of countries and states are enacting laws requiring AI developers to disclose more information about their systems. California’s AB 2013, for example, mandates the publication of a high-level summary of datasets used for training. The EU’s forthcoming AI Act will require companies to provide model deployers with detailed operational instructions, limitations, and risk assessments.

#Google AI #AI policy #copyright #export rules #artificial intelligence