One year after launching its first small language models (SLMs) with Phi-3, Microsoft has announced a major leap forward with the introduction of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. These new models, available on Azure AI Foundry and Hugging Face, mark a new era for SLMs, bringing advanced reasoning capabilities to smaller, more efficient AI systems.
## The Rise of Reasoning Models
Traditionally, complex reasoning tasks such as mathematical problem-solving, multi-step decomposition, and internal reflection were the domain of large, resource-intensive AI models. Microsoft’s new Phi-4 reasoning models challenge this paradigm. By leveraging advanced training techniques, including distillation, reinforcement learning, and careful data curation, these models deliver high-level reasoning performance in a compact footprint, making them suitable for low-latency and resource-constrained environments.
## Phi-4-Reasoning and Phi-4-Reasoning-Plus: Compact Yet Powerful
Phi-4-reasoning is a 14-billion parameter open-weight reasoning model, meticulously fine-tuned on curated datasets and reasoning demonstrations, including those from OpenAI’s o3-mini. This model excels in generating detailed reasoning chains and demonstrates that with the right data and training, smaller models can rival or even surpass much larger counterparts on complex tasks such as mathematical reasoning and Ph.D.-level science questions.
Phi-4-reasoning-plus builds on this foundation, applying additional reinforcement learning to generate longer, more thorough reasoning traces and achieve even higher accuracy. Despite their smaller size, both models outperform OpenAI o1-mini and DeepSeek-R1-Distill-Llama-70B on most benchmarks, including the 2025 AIME, the qualifying exam for the USA Math Olympiad. Notably, Phi-4-reasoning-plus approaches the performance of DeepSeek-R1, a model with 671 billion parameters, on several reasoning benchmarks, a remarkable achievement for a model of its size.
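Reasoning models of this kind typically emit their chain of thought inside `<think>…</think>` tags before stating the final answer, so applications usually strip the trace before showing a result to the user. The sketch below illustrates that post-processing step; the exact tag convention should be verified against the model card on Hugging Face.

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Separate a <think>...</think> reasoning trace from the final answer.
    The tag format is an assumption; check the model card for the exact template."""
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if not match:
        return "", output.strip()          # no trace found: treat everything as the answer
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()  # keep only the text after the closing tag
    return reasoning, answer

sample = "<think>100 * 101 / 2 = 5050.</think>\nThe sum is 5050."
trace, answer = split_reasoning(sample)
print(answer)  # -> The sum is 5050.
```

Separating the trace this way also lets you log or audit the model's reasoning without surfacing it in the user-facing response.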
## Phi-4-Mini-Reasoning: Advanced Reasoning for Lightweight Devices
For applications where computational resources and latency are critical, Phi-4-mini-reasoning offers a compact, transformer-based model optimized for mathematical reasoning. With around 3.8 billion parameters and trained on over one million diverse math problems (including synthetic data from DeepSeek-R1), Phi-4-mini-reasoning is ideal for educational tools, embedded tutoring, and mobile or edge deployment. It delivers step-by-step problem-solving capabilities across a wide range of difficulty levels, from middle school exercises to Ph.D.-level mathematics.
## Real-World Integration: Phi Models Across Microsoft's Ecosystem
Phi models are already making an impact across Microsoft's product ecosystem. On Windows 11 devices, these models can run locally on CPUs and GPUs, and the NPU-optimized Phi Silica variant is now a core part of Copilot+ PCs. This version is designed for high efficiency, fast response, and power savings, enabling features like "Click to Do" and Copilot summary tools in applications such as Outlook, even while offline. Developers can also access these models via APIs to integrate advanced reasoning into their own applications, with further optimizations for Copilot+ PC NPUs on the horizon.
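Azure AI Foundry deployments expose an OpenAI-compatible chat-completions endpoint, so a call from application code has roughly the shape below. This is a standard-library sketch only; the endpoint path, `api-key` header, environment-variable names, and model id are illustrative assumptions, and your deployment's documentation gives the exact values.

```python
import json
import os
import urllib.request

def build_request(question: str, model: str = "Phi-4-reasoning") -> dict:
    """Build an OpenAI-style chat-completions payload (field names follow that API)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Reason step by step, then state the final answer."},
            {"role": "user", "content": question},
        ],
        "max_tokens": 2048,
    }

def ask_phi(question: str, endpoint: str, api_key: str) -> str:
    """POST the request to a deployed model. The '/chat/completions' path and
    'api-key' header are assumptions; check your deployment's docs."""
    req = urllib.request.Request(
        f"{endpoint.rstrip('/')}/chat/completions",
        data=json.dumps(build_request(question)).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Runs only when a real deployment is configured via environment variables:
if os.getenv("AZURE_INFERENCE_ENDPOINT") and os.getenv("AZURE_INFERENCE_KEY"):
    print(ask_phi("How many primes are below 20?",
                  os.environ["AZURE_INFERENCE_ENDPOINT"],
                  os.environ["AZURE_INFERENCE_KEY"]))
```

Keeping the payload builder separate from the network call makes the request shape easy to unit-test without a live deployment.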
## Responsible AI: Safety, Transparency, and Trust
Microsoft emphasizes responsible AI development as a core principle behind the Phi family. The models are developed with robust safety post-training, using supervised fine-tuning, direct preference optimization, and reinforcement learning from human feedback. This approach leverages diverse datasets focused on helpfulness, harmlessness, and safety to minimize risks and ensure reliability. Azure AI Foundry further supports developers with content safety features, prompt shielding, and real-time monitoring for quality and adversarial threats, helping maintain trust and transparency in AI deployments.
## Setting a New Standard for Small Language Models
With the release of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, Microsoft is pushing the boundaries of what’s possible with small language models. These models deliver performance that rivals much larger systems, making advanced reasoning accessible on a wide range of devices and applications. As the Phi family continues to evolve, it sets a new standard for efficient, safe, and powerful AI for developers and users alike.
Explore the new Phi-4 models today on Azure AI Foundry and Hugging Face, and read the technical report for detailed benchmarks and insights into Microsoft’s latest advancements in small language models.