Microsoft Unveils Phi-4 AI Models for Edge Computing
By Ainvest
Wednesday, Apr 30, 2025 11:31 pm ET · 1 min read
Microsoft has released three new open AI models, Phi-4 mini reasoning, Phi-4 reasoning, and Phi-4 reasoning plus, all designed to excel at mathematical reasoning and available on the AI development platform Hugging Face. The models were trained on synthetic math problems, high-quality web data, and curated demonstrations. Phi-4 reasoning plus approaches the performance of DeepSeek's R1, a model with significantly more parameters, and matches OpenAI's o3-mini on OmniMath, a math skills benchmark, indicating that smaller models can reach high levels of performance when trained effectively.
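For readers who want to try the models, the following is a minimal, illustrative Python sketch (not from the article) of how a model of this kind is typically loaded from Hugging Face with the transformers library. The repo id "microsoft/Phi-4-mini-reasoning" is an assumption; check Microsoft's organization page on Hugging Face for the exact names.

# Hedged sketch: load an assumed Phi-4 reasoning checkpoint and ask a short math question.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed repo id, verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "If 3x + 7 = 22, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

On constrained hardware, the same call pattern would typically be used with a quantized checkpoint or a smaller generation budget.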
The Phi-4 mini reasoning model is particularly notable for its efficiency, weighing in at just 3.8 billion parameters. It supports a context length of 128K tokens, making it suitable for environments with tight compute or latency constraints, and it is well suited to educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems.
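To see why a model of this size is plausible on edge or mobile hardware, here is a rough back-of-envelope estimate (my own arithmetic, not a figure from the article) of the weight memory for 3.8 billion parameters at common precisions:

# Rough sketch: approximate weight memory at common precisions (weights only).
params = 3.8e9

for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB of weights")

# Prints roughly: fp16 ~7.1 GiB, int8 ~3.5 GiB, int4 ~1.8 GiB. The KV cache for
# long 128K-token contexts adds further memory on top of the weights.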
The Phi-4 reasoning model is a solid all-rounder, capable of handling a wide range of tasks, including math, coding, and planning. It uses custom think tags to delimit its reasoning steps and has a 32K-token context window, allowing it to process longer documents.
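As an illustration of how such structured output might be consumed, this hedged Python sketch separates the reasoning trace from the final answer. The literal <think>...</think> tag names are an assumption based on the article's mention of custom think tags; adjust the pattern to whatever the model actually emits.

# Hedged sketch: split a model response into its reasoning trace and final answer.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no think block is found."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>22 - 7 = 15, and 15 / 3 = 5.</think> x = 5"
)
print(answer)  # "x = 5"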
The Phi-4 reasoning plus model builds upon the capabilities of the base model, using reinforcement learning to deliver higher accuracy. It is particularly strong in mathematical reasoning and can handle longer reasoning traces, although this comes at the cost of increased inference time.
These models are part of Microsoft's ongoing effort to advance and democratize artificial intelligence through open source and open science. They are designed to be small and efficient, making them accessible to a wide range of users and devices.
References:
[1] https://medium.com/data-science-in-your-pocket/phi-4-reasoning-microsofts-new-llms-are-smarter-faster-free-er-a477e832aae8
[2] https://azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
