Destination

2025-07-12

Microsoft introduces Phi-4-mini-flash-reasoning with up to 10x higher token throughput



Microsoft has introduced Phi-4-mini-flash-reasoning, a lightweight AI model built for scenarios with tight compute, memory, or latency constraints. Designed for edge devices and mobile apps, the model aims to deliver strong reasoning abilities without demanding hardware; according to Microsoft, it achieves up to 10x higher token throughput than its predecessor.



We have found articles similar to the one you are reading. Check out our suggestions below.

VentureBeat

2025-11-17

Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]

Match Score: 641.42

Destination

2025-05-27

How Phi-4-Reasoning Redefines AI Reasoning by Challenging the “Bigger is Better” Myth

Microsoft's recent release of Phi-4-reasoning challenges a key assumption in building artificial intelligence systems capable of reasoning. Since the introduction of chain-of-thought reasoning in [...]

Match Score: 129.32

Destination

2025-02-27

Microsoft expands its SLM lineup with new multimodal and mini Phi-4 models

Microsoft has added two new models to its Phi small language model family: Phi-4-multimodal, which can handle audio, images and text simultaneously, and Phi-4-mini, a streamlined model focused on text [...]

Match Score: 127.80

VentureBeat

2025-10-02

'Western Qwen': IBM wows with Granite 4 LLM launch and hybrid Mamba/Transformer architecture

IBM today announced the release of Granite 4.0, the newest generation of its homegrown family of open source large language models (LLMs) designed to balance high performance with lower memory and cost [...]

Match Score: 125.11

VentureBeat

2025-10-09

Nvidia researchers boost LLMs' reasoning skills by getting them to 'think' during pre-training

Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called reinforcement learning pre-training (RLP), integrates [...]

Match Score: 124.72

VentureBeat

2025-11-01

Large reasoning models almost certainly can think

Recently, there has been a lot of hullabaloo about the idea that large reasoning models (LRM) are unable to think. This is mostly due to a research article published by Apple, "The Illusion of Th [...]

Match Score: 109.05

VentureBeat

2025-09-29

DeepSeek's new V3.2-Exp model cuts API pricing in half to less than 3 cents per 1M input tokens

DeepSeek continues to push the frontier of generative AI...in this case, in terms of affordability. The company has unveiled its latest experimental large language model (LLM), DeepSeek-V3.2-Exp, that [...]

Match Score: 101.83

VentureBeat

2025-10-27

MiniMax-M2 is the new king of open source LLMs (especially for agentic tool calling)

Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that [...]

Match Score: 100.29

VentureBeat

2025-10-28

Microsoft’s Copilot can now build apps and automate your job — here’s how it works

Microsoft is launching a significant expansion of its Copilot AI assistant on Tuesday, introducing tools that let employees build applications, automate workflows, and create specialized AI agents usi [...]

Match Score: 99.98