Microsoft Phi-4: New AI Model Rivals Larger Systems

May 1, 2025
Microsoft Introduces New Open-Source AI Models

On Wednesday, Microsoft released a series of new, openly accessible AI models. The most capable of them rivals OpenAI’s o3-mini on at least one key benchmark.

Focus on Reasoning Capabilities

The newly released models – Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus – are “reasoning” models, meaning they are engineered to spend more processing time checking the accuracy of their solutions to complex problems.

The models represent an expansion of Microsoft’s Phi family of “small models.” This family was initially launched last year to provide a foundational resource for AI developers creating applications for edge computing environments.

Phi 4 Mini Reasoning: Details

Phi 4 mini reasoning was trained on approximately 1 million synthetic mathematical problems generated by R1, the reasoning model developed by the Chinese AI company DeepSeek.

With around 3.8 billion parameters, Phi 4 mini reasoning is optimized for educational uses. Microsoft suggests applications such as “embedded tutoring” systems on devices with limited processing power.

Generally, a model’s parameter count correlates with its problem-solving ability: models with more parameters typically outperform those with fewer.

Phi 4 Reasoning and Phi 4 Reasoning Plus

Phi 4 reasoning, featuring 14 billion parameters, was trained using both “high-quality” data sourced from the web and “curated demonstrations” originating from OpenAI’s o3-mini.

According to Microsoft, this model is particularly well-suited for applications involving mathematics, scientific inquiry, and computer coding.

Phi 4 reasoning plus is an adaptation of Microsoft’s previously released Phi 4 model. It has been refined to function as a reasoning model, achieving enhanced accuracy for specific tasks.

Microsoft asserts that Phi 4 reasoning plus achieves performance levels comparable to DeepSeek R1, a model with a significantly larger parameter count (671 billion). Internal benchmarks also indicate that Phi 4 reasoning plus matches the performance of o3-mini on the OmniMath test, which assesses mathematical skills.

Availability and Accessibility

Phi 4 mini reasoning, Phi 4 reasoning, Phi 4 reasoning plus, and their corresponding technical documentation are available on the AI development platform Hugging Face.

Balancing Size and Performance

“These [new] models balance size and performance through the use of distillation, reinforcement learning, and high-quality data,” Microsoft explained in a blog post.

They are sufficiently small to enable low-latency operation, while still maintaining robust reasoning capabilities that rival those of much larger models. This combination allows even devices with limited resources to efficiently handle complex reasoning tasks.
