2025-05-27

Microsoft's recent release of Phi-4-reasoning challenges a key assumption in building artificial intelligence systems capable of reasoning. Since the introduction of chain-of-thought reasoning in 2022, researchers have believed that advanced reasoning requires very large language models with hundreds of billions of parameters. However, Microsoft's new 14-billion-parameter model, Phi-4-reasoning, questions this belief. Using a data-centric approach [...]
2025-11-17
AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]
2025-02-27
Microsoft has added two new models to its Phi small language model family: Phi-4-multimodal, which can handle audio, images and text simultaneously, and Phi-4-mini, a streamlined model focused on text [...]
2025-07-12
Microsoft has introduced Phi-4-mini-flash-reasoning, a lightweight AI model built for scenarios with tight computing, memory, or latency limits. Designed for edge devices and mobile apps, the model ai [...]
2025-05-01
Microsoft is expanding its Phi series of compact language models with three new variants designed for advanced reasoning tasks. The article Microsoft's Phi-4-reasoning models outperfo [...]
2025-08-19
Game Science Studio isn't resting on its laurels after the success of Black Myth: Wukong. The developer teased a follow-up project as the closer to Gamescom's Opening Night Live showcase wit [...]
2025-10-08
The latest addition to the small model wave for enterprises comes from AI21 Labs, which is betting that bringing models to devices will free up traffic in data centers. AI21’s Jamba Reasoning 3B, a [...]
2025-11-14
Researchers at Google Cloud and UCLA have proposed a new reinforcement learning framework that significantly improves the ability of language models to learn very challenging multi-step reasoning task [...]
2025-01-08
Microsoft Research's new Phi-4 LLM matches the abilities of much larger models while using just 14 billion parameters, about one-fifth the size of similar systems. The article Micros [...]
2025-10-30
Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its [...]