2025-05-20
Google has started testing a reasoning model called Deep Think for Gemini 2.5 Pro, the company revealed at its I/O developer conference. According to DeepMind CEO Demis Hassabis, Gemini's Deep Think uses "the latest cutting-edge research," giving the model the ability to consider multiple hypotheses before responding to queries. Google says it earned an "impressive score" when evaluated on questions from the 2025 United States of America Mathematical Olympiad. However, Google wants to take more time to conduct safety evaluations and gather further input from safety experts before releasing it widely. That' [...]
2025-05-20
Today is one of the most important days on the tech calendar as Google kicked off its I/O developer event with its annual keynote. As ever, the company had many updates for a wide range of products to [...]
2025-02-28
The keyword for the iPhone 16e seems to be "compromise." In this episode, Devindra chats with Cherlynn about her iPhone 16e review, and the two try to figure out who this phone is actually for. Also, [...]
2025-10-27
Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that [...]
2025-10-08
The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoe [...]
2025-10-07
Some of the largest providers of large language models (LLMs) have sought to move beyond multimodal chatbots — extending their models out into "agents" that can actually take more actions [...]
2025-03-13
After being one of the first companies to roll out a Deep Research feature at the end of last year, Google is now making that same tool available to everyone. Starting today, Gemini users can try Deep [...]
2025-10-09
Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called reinforcement learning pre-training (RLP), integrates [...]
2025-10-20
Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs t [...]