OpenAI Releases GPT-4.1 Without Safety Report - Concerns Rise

On Monday, OpenAI introduced its latest family of AI models, GPT-4.1. The company says the new models outperform some of its existing models on certain benchmarks, particularly those for programming.
Absence of a System Card
Notably, GPT-4.1 shipped without the safety report, commonly called a model or system card, that typically accompanies OpenAI's model launches. As of Tuesday morning, OpenAI had not published a safety report for GPT-4.1, and it appears the company does not plan to.
Shaokyi Amdo, an OpenAI spokesperson, told TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”
The Importance of Safety Reports
It is common practice for AI laboratories to publish safety reports. These reports detail the testing procedures, both internal and conducted with external partners, used to assess model safety.
These reports sometimes surface unflattering findings, such as a model’s tendency to deceive users or its potential for undue persuasion. The AI community broadly views them as good-faith efforts to support independent research and rigorous testing.
Declining Reporting Standards
Over recent months, a trend has emerged among leading AI labs to reduce the thoroughness of their reporting. This has drawn criticism from safety researchers.
Some organizations, like Google, have been slow to release safety reports, while others have published reports lacking the usual level of detail.
OpenAI’s Recent History
OpenAI’s own recent record has also faced scrutiny. In December, the company drew criticism for publishing a safety report containing benchmark results that differed from those of the model version it actually deployed.
Last month, OpenAI debuted a model, deep research, weeks before publishing the corresponding system card.
Voluntary Transparency
Steven Adler, a former OpenAI safety researcher, pointed out to TechCrunch that safety reports are not legally required; they are voluntary. However, OpenAI has previously made commitments to governments to enhance transparency regarding its models.
Prior to the U.K. AI Safety Summit in 2023, OpenAI described system cards as “a key part” of its accountability framework in a blog post. Leading up to the Paris AI Action Summit in 2025, OpenAI stated that system cards offer valuable insights into a model’s potential risks.
“System cards are the AI industry’s main tool for transparency and for describing what safety testing was done,” Adler explained to TechCrunch via email. “Today’s transparency norms and commitments are ultimately voluntary, so it is up to each AI company to decide whether or when to release a system card for a given model.”
Concerns Regarding Safety Practices
The release of GPT-4.1 without a system card coincides with growing concerns voiced by current and former employees regarding OpenAI’s safety protocols.
Last week, Adler, along with eleven other ex-OpenAI employees, submitted a proposed amicus brief in Elon Musk’s legal case against OpenAI. Their argument centered on the possibility that a for-profit OpenAI might compromise on safety measures.
Recent reporting by the Financial Times indicates that competitive pressures have led ChatGPT’s creator to reduce the time and resources dedicated to safety testing.
Performance and Risk
While GPT-4.1 may not be OpenAI’s most powerful AI model, it does deliver significant gains in efficiency and speed. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that these performance gains make a safety report all the more important.
He emphasized that the more sophisticated a model is, the higher the risk it could pose.
Resistance to Regulation
Numerous AI labs have actively opposed attempts to legally mandate safety reporting requirements. For instance, OpenAI opposed California’s SB 1047, which would have required many AI developers to audit and publish safety evaluations for publicly available models.