OpenAI Launches o3 and o4-mini AI Reasoning Models

On Wednesday, OpenAI unveiled o3 and o4-mini, a new generation of AI reasoning models designed to work through questions step by step before responding.
Introducing o3 and o4-mini
The company calls o3 its most advanced reasoning model to date, outperforming its previous models on evaluations spanning math, coding, logical reasoning, science, and visual understanding.
o4-mini, meanwhile, offers a competitive trade-off among price, speed, and performance, key considerations for developers choosing a model to power their applications.
Enhanced Capabilities with Tool Use
Distinguishing themselves from earlier reasoning models, both o3 and o4-mini are capable of leveraging tools within ChatGPT. These include functionalities like web browsing, Python code execution, image analysis, and image creation.
These models, along with a variant of o4-mini called o4-mini-high, which spends more time crafting answers in exchange for greater reliability, are now available to subscribers of OpenAI's Pro, Plus, and Team plans.
The Competitive AI Landscape
These new models are integral to OpenAI’s strategy to maintain a leading position against competitors such as Google, Meta, xAI, Anthropic, and DeepSeek in the rapidly evolving AI industry.
OpenAI was first to market with an AI reasoning model, o1, but rivals quickly shipped versions that matched or exceeded it. Reasoning models now dominate the field as labs push to squeeze more performance out of their systems.
A Shift in Development Focus
o3 almost didn't ship. Sam Altman, OpenAI's CEO, had signaled that the company would skip a standalone release and instead fold o3's core technology into a more sophisticated successor. Mounting competition ultimately led OpenAI to reverse course.
Performance Benchmarks
OpenAI reports that o3 achieves state-of-the-art performance on SWE-bench Verified (without custom scaffolding), a benchmark measuring coding ability, with a score of 69.1%. The o4-mini model follows closely at 68.1%.
For comparison, OpenAI’s o3-mini scored 49.3% on the same test, while Claude 3.7 Sonnet attained a score of 62.3%.
“Thinking with Images”
OpenAI asserts that o3 and o4-mini represent the company’s first models capable of “reasoning with images.” Users can now upload images – such as whiteboard sketches or diagrams extracted from PDFs – to ChatGPT.
The models will then analyze these images during their “chain-of-thought” process before providing answers. This allows them to interpret blurry or low-resolution images and even perform operations like zooming or rotating images during reasoning.
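For developers, this kind of image input is typically bundled with text in a single request. Below is a minimal sketch, using OpenAI's Python SDK, of what sending an image to o3 might look like; the payload shape follows the SDK's existing vision interface, and the image URL and prompt are placeholders rather than anything OpenAI has published for these models.

```python
# Hypothetical sketch: sending an image to o3 for analysis.
# The model name comes from the article; the message shape follows the
# Python SDK's existing vision interface and is an assumption here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this whiteboard sketch describe?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/whiteboard.png"},  # placeholder
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```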
Expanded Functionality
Beyond image processing, o3 and o4-mini can execute Python code directly within a browser using ChatGPT’s Canvas feature. They can also access current information by searching the web.
Availability via API
In addition to ChatGPT integration, all three models – o3, o4-mini, and o4-mini-high – will be accessible through OpenAI’s developer APIs, specifically the Chat Completions API and Responses API.
This lets engineers build the models into their own applications, with usage-based pricing.
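For a plain text request, a minimal Responses API call might look like the sketch below; the model identifier is taken from the article, and the prompt is an arbitrary placeholder.

```python
# Minimal sketch: calling o4-mini through the Responses API.
# The model name comes from the article; output_text is the
# SDK's convenience accessor for the model's final answer.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",
    input="Walk through a proof that the square root of 2 is irrational.",
)

print(response.output_text)
```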
Pricing Structure
OpenAI is offering o3 at a competitive price of $10 per million input tokens (approximately 750,000 words – exceeding the length of the Lord of the Rings series) and $40 per million output tokens.
o4-mini is priced similarly to o3-mini, at $1.10 per million input tokens and $4.40 per million output tokens.
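Taken together, those rates make per-request costs straightforward to estimate. The sketch below runs the arithmetic for a hypothetical request; the prices are the ones quoted above, while the token counts are invented for illustration.

```python
# Cost estimate from the published per-token prices (USD per million tokens).
# Prices are from the article; the token counts below are made up.
PRICES_PER_MILLION = {
    "o3":      {"input": 10.00, "output": 40.00},
    "o4-mini": {"input": 1.10,  "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 5,000-token prompt producing a 1,500-token answer.
print(f"o3:      ${request_cost('o3', 5_000, 1_500):.4f}")       # $0.1100
print(f"o4-mini: ${request_cost('o4-mini', 5_000, 1_500):.4f}")  # $0.0121
```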
Future Plans: o3-pro
OpenAI intends to release o3-pro in the coming weeks. This version will utilize greater computational resources to enhance answer quality and will be exclusive to ChatGPT Pro subscribers.
Looking Ahead to GPT-5
Altman has suggested that o3 and o4-mini may be the last standalone AI reasoning models in ChatGPT before GPT-5 arrives.
GPT-5 is anticipated to unify traditional models, such as GPT-4.1, with the capabilities of the current reasoning models.