DeepSeek Model Now Available on Microsoft Azure

Microsoft and DeepSeek: A Complex Relationship
Despite allegations by OpenAI, a close Microsoft partner, that DeepSeek engaged in intellectual property theft and violated its terms of service, Microsoft is proceeding with integrating DeepSeek's models into its cloud infrastructure.
R1 Model Now Available on Azure AI Foundry
Microsoft announced on Wednesday the availability of R1, DeepSeek’s reasoning model, through its Azure AI Foundry service. This platform consolidates various AI services designed for enterprise use.
The company emphasized that the R1 version offered on Azure AI Foundry has been subjected to thorough red teaming and safety evaluations. These included automated assessments of model behavior and comprehensive security reviews to minimize potential risks.
Future Integration with Copilot+ PCs
Microsoft plans to enable customers to utilize “distilled” versions of R1 for local execution on Copilot+ PCs in the coming months. These PCs represent Microsoft’s line of Windows hardware specifically engineered for AI capabilities.
Microsoft expressed enthusiasm about the potential of developers and enterprises to utilize R1 to address real-world problems and create innovative experiences.
Investigation into Potential Data Exfiltration
This move is particularly noteworthy given reports that Microsoft initiated an investigation into whether DeepSeek misused Microsoft's and OpenAI's services. Security researchers at Microsoft suspect DeepSeek may have extracted a substantial amount of data via OpenAI's API in the autumn of 2024.
As OpenAI’s primary investor, Microsoft alerted OpenAI to this concerning activity, as reported by Bloomberg.
R1's Appeal Despite Controversy
The considerable attention surrounding R1 likely influenced Microsoft’s decision to incorporate it into its cloud offerings, even amidst the ongoing investigation.
Accuracy and Censorship Concerns
It remains uncertain whether Microsoft implemented any modifications to enhance R1’s accuracy or address its censorship tendencies. NewsGuard’s testing revealed that R1 delivers inaccurate or evasive responses 83% of the time when queried about current events.
Furthermore, a separate assessment indicated that R1 declines to answer 85% of prompts pertaining to China, potentially due to the censorship policies affecting AI models developed within that nation.