Google AI Model Report: Safety Concerns Raised by Experts

Google's Gemini 2.5 Pro Safety Report Draws Scrutiny
Following the release of its advanced AI model, Gemini 2.5 Pro, Google issued a technical report detailing its internal safety assessments. However, several experts say the report lacks key details, making it difficult to thoroughly evaluate the model's potential risks.
The Value of Technical Reporting in AI Development
Technical reports are considered valuable resources within the AI community. They often reveal crucial information, even aspects that companies might not proactively publicize, supporting independent research and bolstering safety evaluations.
Google takes a different approach from some competitors, releasing technical reports only once a model is considered to have graduated from the “experimental” phase. The company also withholds the findings of some of its “dangerous capability” evaluations from these reports, reserving them for a separate audit.
Concerns Regarding Report Specificity
Despite Google’s approach, experts interviewed by TechCrunch voiced disappointment with the limited information presented in the Gemini 2.5 Pro report. A key omission they noted was any detailed discussion of Google’s Frontier Safety Framework (FSF).
The FSF was introduced last year as a means of proactively identifying potential AI capabilities that could lead to significant harm.
Expert Perspectives on Transparency
Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, said the report’s sparseness, combined with its release weeks after the model was made publicly available, makes it impossible to verify whether Google is living up to its stated commitments, and thus impossible to properly assess the safety and security of its models.
Thomas Woodside, co-founder of the Secure AI Project, acknowledged the report’s release but questioned Google’s dedication to providing timely, supplementary safety evaluations. He pointed out that the last time Google published the results of dangerous capability tests was June 2024, and those concerned a model announced in February of that same year.
Lack of Reports for Other Models
Adding to the concerns, Google has not yet released a report for Gemini 2.5 Flash, a smaller and more efficient model announced recently. A company spokesperson indicated that a report for Flash is “coming soon.”
Woodside expressed hope that this signifies a commitment to more frequent updates, including evaluations of models before public deployment, as these too could present substantial risks.
Industry-Wide Trends in Transparency
Google isn’t alone in facing criticism regarding transparency. Meta released a similarly concise safety evaluation for its new Llama 4 open models, and OpenAI chose not to publish a report for its GPT-4.1 series.
Regulatory Commitments and Accountability
Google has previously made assurances to regulators regarding a high standard of AI safety testing and reporting. Two years ago, the company told the U.S. government it would publish safety reports for all “significant” public AI models “within scope.”
Similar commitments were made to other countries, promising “public transparency” surrounding AI products.
A "Race to the Bottom" on AI Safety?
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, characterized the trend of infrequent and vague reports as a “race to the bottom” on AI safety.
He noted that, alongside reports of reduced safety testing timelines at competing labs like OpenAI, the limited documentation for Google’s flagship AI model suggests a concerning prioritization of speed to market over thorough safety and transparency.
Google's Ongoing Safety Measures
Google maintains that it conducts comprehensive safety testing and “adversarial red teaming” for models prior to release, even if these details aren’t fully elaborated upon in its technical reports.
Updated 4/22 at 12:58 p.m. Pacific: Language was modified to clarify the technical report’s reference to Google’s FSF.