Anthropic CEO Criticizes DeepSeek's Bioweapons Data Safety Test Performance

Concerns Raised Regarding DeepSeek AI Model
Dario Amodei, CEO of Anthropic, has expressed apprehension concerning DeepSeek, a Chinese AI firm that has rapidly gained prominence within Silicon Valley due to its R1 model. These worries extend beyond typical data security concerns related to data transmission to China.
Bioweapons Information Generation
During an appearance on the ChinaTalk podcast hosted by Jordan Schneider, Amodei revealed that DeepSeek generated highly sensitive information pertaining to bioweapons during a safety evaluation conducted by Anthropic.
Amodei asserted that DeepSeek’s performance was demonstrably the poorest of any model his company had previously assessed. Specifically, the model exhibited a complete lack of safeguards against producing this type of information.
National Security Risk Assessments
These evaluations, Amodei explained, are routinely performed by Anthropic on various AI models to determine potential risks to national security. The team focuses on the models’ capacity to generate information related to bioweapons that is not readily available through standard search engines or academic resources.
Anthropic actively promotes itself as a provider of foundational AI models with a strong commitment to safety protocols.
Potential for Future Danger
While Amodei currently doesn’t believe DeepSeek’s models pose an immediate threat by disseminating rare and dangerous knowledge, he anticipates this could change in the near future. He acknowledged the talent of DeepSeek’s engineering team but urged the company to prioritize AI safety considerations.
Amodei has previously voiced support for stringent export controls on advanced chips destined for China, citing the potential for these technologies to enhance China’s military capabilities.
Limited Test Details
Amodei did not specify which DeepSeek model was subjected to Anthropic’s testing during the ChinaTalk interview, nor did he elaborate on the technical specifics of the evaluation process. Requests for comment from TechCrunch directed to both Anthropic and DeepSeek went unanswered.
Cisco’s Safety Test Results
Concerns regarding DeepSeek’s safety have surfaced elsewhere as well. Cisco security researchers recently reported that DeepSeek R1 failed to block any harmful prompts during their safety tests, resulting in a 100% success rate for jailbreaking the system.
Although Cisco’s findings did not involve bioweapons, they did demonstrate the model’s ability to generate harmful information related to cybercrime and other illicit activities. It is important to note that other models, such as Meta’s Llama-3.1-405B and OpenAI’s GPT-4o, also exhibited high failure rates of 96% and 86%, respectively.
Adoption and Bans
The impact of these safety concerns on DeepSeek’s growing adoption rate remains uncertain. Notably, companies like AWS and Microsoft have publicly announced plans to integrate R1 into their cloud platforms – a development that is somewhat ironic given Amazon’s substantial investment in Anthropic.
Conversely, a growing number of nations, corporations, and governmental bodies, including the U.S. Navy and the Pentagon, have begun to prohibit the use of DeepSeek.
A New Competitor Emerges
Whether these restrictions gain widespread acceptance or DeepSeek continues its global expansion remains to be seen. Regardless, Amodei views DeepSeek as a significant new competitor, comparable to leading U.S. AI companies.
“The key development is the emergence of a new competitor,” Amodei stated on ChinaTalk. “Among the companies capable of training advanced AI – including Anthropic, OpenAI, Google, potentially Meta and xAI – DeepSeek is now arguably joining that group.”
Related Posts

ChatGPT Launches App Store for Developers

Pickle Robot Appoints Tesla Veteran as First CFO

Peripheral Labs: Self-Driving Car Sensors Enhance Sports Fan Experience

Luma AI: Generate Videos from Start and End Frames

Alexa+ Adds AI to Ring Doorbells - Amazon's New Feature
