venturebeat
MIT’s new ‘recursive’ framework lets LLMs process 10 million tokens without context rot

Recursive language models (RLMs) are an inference technique, developed by researchers at MIT CSAIL, that treats long prompts as an external environment for the model. Instead of forcing the entire prompt into the model's context window, the framework lets the LLM programmatically examine, decompose, and recursively call itself over snippets of the text. Rather than expanding context windows or summarizing old information, the MIT team reframes long-context reasoning as a systems problem. By letting models treat prompts as something they can inspect with code, recursive language models allow LLMs to reason over millions of tokens without retraining. This offers enterprises a practical path to long-horizon tasks like codebase analysis, legal review, and multi-step reasoning that routine [...]
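The core idea can be sketched in a few lines: the document stays outside the model's context, and when it is too large for one call, the model is invoked recursively on snippets and once more to combine the partial answers. This is an illustrative sketch only, not MIT's implementation; `call_llm`, `CONTEXT_LIMIT`, and the halving strategy are all hypothetical stand-ins.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion API call."""
    # For this sketch, just report how much text the model "saw".
    return f"summary({len(prompt)} chars)"

CONTEXT_LIMIT = 4_000  # max characters per model call (illustrative value)

def recursive_lm(task: str, document: str) -> str:
    """Answer `task` over `document` without exceeding CONTEXT_LIMIT per call.

    The document lives outside the model's context window. When it fits,
    the model sees it directly; otherwise it is split in half, each half is
    handled recursively, and a final call reasons over the partial answers
    instead of the raw text.
    """
    if len(document) <= CONTEXT_LIMIT:
        return call_llm(f"{task}\n---\n{document}")
    mid = len(document) // 2
    left = recursive_lm(task, document[:mid])
    right = recursive_lm(task, document[mid:])
    # Combine step: the model only ever sees bounded-size inputs.
    return call_llm(f"{task}\nCombine partial answers:\n{left}\n{right}")
```

In practice the decomposition would be driven by the model itself writing code to inspect and slice the prompt, rather than a fixed halving rule as shown here.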


We have discovered tools similar to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat
DeepSeek drops open-source model that compresses text 10x through images, defying conventions

DeepSeek, the Chinese artificial intelligence research company that has repeatedly challenged assumptions about AI development costs, has released a new model that fundamentally reimagines how large l [...]

Match Score: 174.55

venturebeat
GAM takes aim at “context rot”: A dual-agent memory architecture that outperforms long-context LLMs

For all their superhuman power, today’s AI models suffer from a surprisingly human flaw: They forget. Give an AI assistant a sprawling conversation, a multi-step reasoning task or a project spanning [...]

Match Score: 165.82

venturebeat
ACE prevents context collapse with ‘evolving playbooks’ for self-improving AI agents

A new framework from Stanford University and SambaNova addresses a critical challenge in building robust AI agents: context engineering. Called Agentic Context Engineering (ACE), the framework automat [...]

Match Score: 131.18

venturebeat
Samsung AI researcher's new, open reasoning model TRM outperforms models 10,000X larger — on specific problems

The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoe [...]

Match Score: 126.98

venturebeat
Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth gr [...]

Match Score: 110.19

Destination
Framework Desktop (2025) Review: Powerful, but perhaps not for everyone

The most obvious question is “Why?” Framework builds modular, repairable laptops that anyone can take apart and put back together again. It’s a big deal in an era where laptops are [...]

Match Score: 100.53

Destination
Framework Laptop 12 review: Doing the right thing comes at a cost

Earlier this year, Framework announced it was making a smaller, 12-inch laptop and a beefy desktop to go alongside its 13- and 16-inch notebooks. A few months later, and the former has arrived, puttin [...]

Match Score: 92.34

venturebeat
IndexCache, a new sparse attention optimizer, delivers 1.82x faster inference on long-context AI models

Processing 200,000 tokens through a large language model is expensive and slow: the longer the context, the faster the costs spiral. Researchers at Tsinghua University and Z.ai have built a technique [...]

Match Score: 88.00

venturebeat
DeepSeek's new V3.2-Exp model cuts API pricing in half to less than 3 cents per 1M input tokens

DeepSeek continues to push the frontier of generative AI...in this case, in terms of affordability. The company has unveiled its latest experimental large language model (LLM), DeepSeek-V3.2-Exp, that [...]

Match Score: 87.72