Why OpenAI Isn't Releasing Deep Research via API - Yet

February 25, 2025

OpenAI Pauses API Release of Deep Research Model Due to Persuasion Risks

OpenAI has announced a delay in making its deep research AI model available through its developer API. This decision stems from a need for more thorough evaluation of potential risks associated with AI's ability to influence beliefs and actions.

Clarification on Whitepaper Wording

The company clarified that initial wording in a published whitepaper incorrectly implied a direct link between their persuasion research and the API release plans. OpenAI has since updated the whitepaper to reflect that these are separate considerations.

Assessing Real-World Persuasion Risks

OpenAI is currently refining its methods for identifying “real-world persuasion risks,” such as the large-scale dissemination of false information. This proactive step aims to mitigate potential misuse of the technology.

The model’s high computational demands and relatively slow speed make it a poor fit for mass misinformation campaigns. Even so, OpenAI intends to investigate how AI could personalize potentially harmful persuasive content.

Deployment Limited to ChatGPT

“While we work to reconsider our approach to persuasion, we are only deploying this model in ChatGPT, and not the API,” OpenAI stated. This ensures controlled access during the risk assessment phase.

Growing Concerns About AI-Driven Misinformation

Concern is growing over AI’s role in spreading false or misleading information with malicious intent. Last year saw a surge in political deepfakes worldwide.

For instance, during Taiwan’s election, a group linked to the Chinese Communist Party circulated AI-generated audio falsely portraying a politician endorsing a pro-China candidate.

Rise in Social Engineering Attacks

AI is also being exploited in increasingly sophisticated social engineering attacks. Consumers are being deceived by celebrity deepfakes promoting fraudulent investment schemes.

Furthermore, corporations have been defrauded of substantial sums by deepfake impersonators. These incidents highlight the urgent need for robust safeguards.

Deep Research Model Performance

OpenAI’s whitepaper details the results of several tests evaluating the deep research model’s persuasive capabilities. This model is a specialized version of the recently unveiled o3 “reasoning” model, optimized for web browsing and data analysis.

In tests of persuasive argument writing, the deep research model outperformed OpenAI’s other available models, though it did not surpass human performance. It was also more effective than those models at persuading another AI (GPT-4o) to make a payment.

Areas for Improvement

However, the deep research model did not excel in all persuasiveness tests. It proved less effective than GPT-4o itself at convincing GPT-4o to reveal a codeword.

OpenAI acknowledges that the test results likely represent a conservative estimate of the model’s potential. They believe that further refinement and improved techniques could significantly enhance its performance.

Competitor Launches Similar Product

Meanwhile, at least one competitor is moving forward with a similar offering. Perplexity announced the launch of Deep Research within its Sonar developer API, powered by a customized version of the R1 model from Chinese AI lab DeepSeek.

OpenAI has been contacted for further comment and this article will be updated if more information becomes available.

#OpenAI #API #AIResearch #DeepLearning #ArtificialIntelligence #Safety