venturebeat
Fine-tuning vs. in-context learning: New research guides better LLM customization for real-world tasks

By combining fine-tuning and in-context learning, you get LLMs that can learn tasks that would be too difficult or expensive for either method [...]
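The excerpt above is cut off, but the core idea in the headline, pairing fine-tuning with in-context learning, can be illustrated with a minimal sketch. This is not the study's setup: it assumes a hypothetical already-fine-tuned checkpoint ("my-org/domain-finetuned-llm") and uses Hugging Face transformers to layer a few in-context examples on top of it at inference time.

```python
# Minimal sketch (assumptions: a causal LM checkpoint "my-org/domain-finetuned-llm"
# that has already been fine-tuned on task data; Hugging Face transformers installed).
# Idea from the article's framing: fine-tune for the domain, then still supply a few
# in-context examples at inference so the model adapts to cases fine-tuning missed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "my-org/domain-finetuned-llm"  # hypothetical fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# A handful of in-context demonstrations layered on top of the fine-tuned weights.
few_shot_examples = [
    ("Invoice INV-102 is 45 days overdue.", "escalate"),
    ("Customer asked to update their billing address.", "routine"),
]

def build_prompt(query: str) -> str:
    """Prepend few-shot demonstrations to the query (the in-context learning part)."""
    demos = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in few_shot_examples)
    return f"{demos}\nInput: {query}\nLabel:"

inputs = tokenizer(build_prompt("Payment failed twice for order 8841."), return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the combination, as the teaser frames it, is that the fine-tuned weights carry the broad domain behavior while the few-shot demonstrations steer the model toward the specific task at hand.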

We have discovered tools similar to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat
Google’s new Deep Research and Deep Research Max agents can search the web and your private data

Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product's debut, launching two new agents — Deep Research and Deep Research Max [...]

Match Score: 143.67

venturebeat
ACE prevents context collapse with ‘evolving playbooks’ for self-improving AI agents

A new framework from Stanford University and SambaNova addresses a critical challenge in building robust AI agents: context engineering. Called Agentic Context Engineering (ACE), the framework automat [...]

Match Score: 141.94

venturebeat
MIT's new fine-tuning method lets LLMs learn new skills without losing old ones

When enterprises fine-tune LLMs for new tasks, they risk breaking everything the models already know. This forces companies to maintain separate models for every skill. Researchers at MIT, the Improbab [...]

Match Score: 118.61

venturebeat
Self-improving language models are becoming reality with MIT's updated SEAL technique

Researchers at the Massachusetts Institute of Technology (MIT) are gaining renewed attention for developing and open sourcing a technique that allows large language models (LLMs) — like those underp [...]

Match Score: 114.40

venturebeat
Baseten takes on hyperscalers with new AI training platform that lets you own your model weights

Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean [...]

Match Score: 108.21

venturebeat
Researchers find that retraining only small parts of AI models can cut costs and prevent forgetting

Enterprises often find that when they fine-tune models, an effective approach to making a large language model (LLM) fit for purpose and grounded in data, the model can lose some of its abilities [...] (see the sketch after this entry)

Match Score: 107.89
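The approach named in the headline above, retraining only a small part of the model, can be sketched in a few lines. This is a generic illustration of the general idea (freeze most parameters, update a small named subset), not the researchers' specific method; the model, layer choice, and data are placeholders.

```python
# Minimal sketch of the general idea (not the paper's exact recipe): freeze most of a
# network and retrain only a small, named subset of parameters, which cuts compute and
# limits how far the rest of the model drifts (one common way to reduce forgetting).
# Model, layer choice, and data below are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),          # pretend this head is the small part we retrain
)

# Freeze everything, then re-enable gradients only for the final layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy fine-tuning step on random stand-in data.
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"updated {sum(p.numel() for p in trainable)} of {sum(p.numel() for p in model.parameters())} parameters")
```

In practice the retrained subset could be a head, a few layers, or adapter weights; which subset is chosen determines the cost and forgetting trade-off.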

venturebeat
MemRL outperforms RAG on complex agent benchmarks without fine-tuning

A new technique developed by researchers at Shanghai Jiao Tong University and other institutions enables large language model agents to learn new skills without the need for expensive fine-tuning. The [...]

Match Score: 105.99

venturebeat
Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]

Match Score: 105.00

venturebeat
GAM takes aim at “context rot”: A dual-agent memory architecture that outperforms long-context LLMs

For all their superhuman power, today’s AI models suffer from a surprisingly human flaw: They forget. Give an AI assistant a sprawling conversation, a multi-step reasoning task or a project spanning [...]

Match Score: 101.97