2025-10-13
Researchers at the Massachusetts Institute of Technology (MIT) are gaining renewed attention for developing and open-sourcing a technique that allows large language models (LLMs), like those underpinning ChatGPT and most modern AI chatbots, to improve themselves by generating synthetic data to fine-tune on.
The technique, known as SEAL (Self-Adapting LLMs), was first described in a paper published back in June and covered by VentureBeat at the time.
A significantly expanded and updated version of the paper was released last month, as well [...]
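To make the idea concrete, here is a minimal toy sketch of a SEAL-style loop: the model proposes its own synthetic training examples ("self-edits"), a copy is fine-tuned on each proposal, and the proposal whose fine-tuned copy scores best on a probe is committed. Every class, function, and scoring rule below is an illustrative stand-in, not the paper's actual interface or training objective.

```python
import copy
import random

random.seed(0)

class ToyModel:
    """Stand-in for an LLM; 'knowledge' is just a set of strings."""
    def __init__(self):
        self.knowledge = set()

    def generate_self_edit(self, context):
        # Hypothetical: propose one synthetic training example from context.
        return f"fact derived from: {context} #{random.randint(0, 99)}"

    def finetune(self, examples):
        # Hypothetical fine-tuning step: absorb the examples.
        self.knowledge.update(examples)

def evaluate(model, probe):
    # Hypothetical reward: count absorbed facts that mention the probe.
    return sum(probe in fact for fact in model.knowledge)

model = ToyModel()
context, probe = "a new document about perovskite solar cells", "solar cells"

for step in range(3):
    candidates = [model.generate_self_edit(context) for _ in range(4)]
    scored = []
    for edit in candidates:
        trial = copy.deepcopy(model)   # fine-tune a throwaway copy per edit
        trial.finetune([edit])
        scored.append((evaluate(trial, probe), edit))
    best_score, best_edit = max(scored)
    model.finetune([best_edit])        # commit only the winning self-edit
    print(f"step {step}: kept {best_edit!r} (score={best_score})")
```

The paper's version drives this selection with reinforcement learning, using the updated model's downstream performance as the reward, rather than the brute-force scoring shown here.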
2025-12-10
The tools are available to everyone. The subscription is company-wide. The training sessions have been held. And yet, in offices from Wall Street to Silicon Valley, a stark divide is opening between w [...]
2025-11-11
Researchers at Meta FAIR and the National University of Singapore have developed a new reinforcement learning framework for self-improving AI systems. Called Self-Play In Corpus Environments (SPICE), [...]
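The article's details are truncated here, but the self-play framing can be sketched in miniature: one role (a "challenger") mines question-answer tasks from a document corpus, another role (a "reasoner") tries to solve them, and each side is scored. The roles, rewards, and toy corpus below are illustrative assumptions, not the paper's actual formulation.

```python
import random

random.seed(1)

# Toy corpus of (passage, answer) pairs standing in for real documents.
corpus = [
    ("The Eiffel Tower is in Paris.", "Paris"),
    ("Water boils at 100 degrees Celsius.", "100"),
    ("The Nile flows through Egypt.", "Egypt"),
]

def challenger(doc):
    # Hypothetical: turn a passage into a fill-in-the-blank task.
    text, answer = doc
    return text.replace(answer, "___"), answer

def reasoner(question, skill):
    # Hypothetical solver: succeeds with probability 'skill'.
    return random.random() < skill

skill = 0.3
for episode in range(5):
    question, gold = challenger(random.choice(corpus))
    solved = reasoner(question, skill)
    reasoner_reward = 1.0 if solved else 0.0
    # Crude stand-in for self-play pressure: the challenger scores highest
    # when its tasks sit near the reasoner's current frontier.
    challenger_reward = 1.0 - abs(0.5 - skill) * 2
    skill = min(1.0, skill + 0.1 * reasoner_reward)  # toy "RL update"
    print(f"ep {episode}: solved={solved} "
          f"r_reason={reasoner_reward} r_chal={challenger_reward:.2f}")
```

One plausible reading of the corpus grounding is that it keeps the challenger's tasks checkable against real documents, instead of letting self-play invent problems with no verifiable answers.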
2025-03-21
New studies from OpenAI and MIT Media Lab found that, generally, the more time users spend talking to ChatGPT, the lonelier they feel. The connection was made as part of two yet-to-be-peer-reviewed s [...]
2025-12-04
OpenAI researchers have introduced a novel method that acts as a "truth serum" for large language models (LLMs), compelling them to self-report their own misbehavior, hallucinations and poli [...]
2025-10-16
A new framework from Stanford University and SambaNova addresses a critical challenge in building robust AI agents: context engineering. Called Agentic Context Engineering (ACE), the framework automat [...]
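The teaser cuts off, but the general shape of context engineering can be illustrated: instead of updating model weights, the agent maintains an evolving context "playbook" that accumulates small, curated lessons after each attempt. The roles and update rule below are assumptions for illustration, not ACE's actual components.

```python
playbook = []  # evolving context passed to the agent on every call

def generate(task, context):
    # Hypothetical agent step: succeeds only if the needed rule is in context.
    needed = task["requires"]
    if needed in context:
        return True, "ok"
    return False, f"failed: missing rule {needed!r}"

def reflect(task, feedback):
    # Hypothetical reflector: distill a failure into one reusable lesson.
    return task["requires"] if feedback.startswith("failed") else None

def curate(playbook, lesson):
    # Incremental delta update: append new lessons, skip duplicates,
    # rather than rewriting the whole context each time.
    if lesson and lesson not in playbook:
        playbook.append(lesson)

tasks = [{"requires": "quote prices in USD"},
         {"requires": "retry on timeout"}]

for epoch in range(2):
    for task in tasks:
        success, feedback = generate(task, playbook)
        curate(playbook, reflect(task, feedback))
    print(f"epoch {epoch}: playbook={playbook}")
```

The appeal of this style of adaptation is that the "training signal" lands in plain text an operator can read and audit, not in opaque weight updates.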
2025-10-08
The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoe [...]
2025-11-10
Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean [...]
2025-10-29
When researchers at Anthropic injected the concept of "betrayal" into their Claude AI model's neural networks and asked if it noticed anything unusual, the system paused before respondi [...]
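The experiment's mechanics can be gestured at with a toy model: pick a direction in activation space that stands for a concept, add it to the hidden state mid-forward-pass, and check whether a probe along that direction registers the shift. The network, vector, and probe below are illustrative stand-ins, not Anthropic's actual method or model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden size

W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
concept = rng.normal(size=d)            # stand-in "betrayal" direction
concept /= np.linalg.norm(concept)

def forward(x, inject=False, strength=4.0):
    h = np.tanh(W1 @ x)                 # hidden activations
    if inject:
        h = h + strength * concept      # concept injection into hidden state
    return h, W2 @ h

x = rng.normal(size=d)
h_clean, _ = forward(x)
h_injected, _ = forward(x, inject=True)

# A linear probe along the concept direction "notices" the injection.
print("clean projection:   ", float(concept @ h_clean))
print("injected projection:", float(concept @ h_injected))
```

Whether the model itself can verbally report such an internal shift, rather than an external probe detecting it, is the question the Anthropic experiment puts to Claude.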