Destination

2025-06-07

Apple study finds "a fundamental scaling limitation" in reasoning models' thinking abilities

A new Apple study shows that current reasoning models such as Claude 3.7 Thinking or DeepSeek-R1 not only fail at complex logic tasks but, paradoxically, even think less as difficulty increases. The models show three performance regimes: on simple tasks, classic language models without a dedicated thinking function are more accurate; at medium complexity, reasoning models have the advantage; at high complexity, all models collapse completely, regardless of the available compute budget. The researchers speak of a fundamental scaling limit of the reasoning approach and do not see any generalizable problem-so [...]


We have discovered similar articles related to this topic. Check out the suggestions below.

venturebeat

2025-11-06

Moonshot's Kimi K2 Thinking emerges as leading open source AI, outperforming GPT-5, Claude Sonnet 4.5 on key benchmarks

Even as concern and skepticism grow over U.S. AI startup OpenAI's buildout strategy and high spending commitments, Chinese open source AI providers are escalating their competition, and one has e [...]

Match Score: 200.24

venturebeat

2025-11-17

Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]

Match Score: 153.37

venturebeat

2025-11-12

Baidu just dropped an open-source multimodal AI that it claims beats GPT-5 and Gemini

Baidu Inc., China's largest search engine company, released a new artificial intelligence model on Monday that its developers claim outperforms competitors from Google and OpenAI on several visio [...]

Match Score: 131.77

venturebeat

2025-10-20

New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs t [...]

Match Score: 111.09

venturebeat

2025-11-14

Google’s new AI training method helps small models tackle complex reasoning

Researchers at Google Cloud and UCLA have proposed a new reinforcement learning framework that significantly improves the ability of language models to learn very challenging multi-step reasoning task [...]

Match Score: 110.93

venturebeat

2025-11-01

Large reasoning models almost certainly can think

Recently, there has been a lot of hullabaloo about the idea that large reasoning models (LRM) are unable to think. This is mostly due to a research article published by Apple, "The Illusion of Th [...]

Match Score: 108.65

venturebeat

2025-10-08

Samsung AI researcher's new, open reasoning model TRM outperforms models 10,000X larger — on specific problems

The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoe [...]

Match Score: 100.55

venturebeat

2025-10-30

Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its [...]

Match Score: 96.94

venturebeat

2025-10-27

MiniMax-M2 is the new king of open source LLMs (especially for agentic tool calling)

Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that [...]

Match Score: 93.07