2025-03-30
Researchers at the Hebrew University of Jerusalem have discovered that the number of documents processed in Retrieval Augmented Generation (RAG) affects language model performance, even when the total text length remains constant.
The article Study finds that fewer documents can lead to better performance in RAG systems appeared first on THE DECODER.
2025-05-10
Retrieval-augmented generation (RAG) promises to help medical AI systems deliver up-to-date and reliable answers. But a new review shows that, so far, RAG rarely works as intended in real-world health [...]
2025-03-31
Retrieval-Augmented Generation (RAG) is an approach to building AI systems that combines a language model with an external knowledge source. In simple terms, the AI first searches for relevant documents [...]
2025-05-28
A new study from Microsoft and Salesforce finds that even state-of-the-art AI language models become dramatically less reliable as conversations get longer and users reveal their requirements step by step [...]
2025-06-07
LLMs designed for reasoning, like Claude 3.7 and DeepSeek-R1, are supposed to excel at complex problem-solving by simulating thought processes. But a new study by Apple researchers suggests that these [...]
2025-07-10
A new study analyzing 25 language models finds that most do not fake safety compliance - though not due to a lack of capability. [...]
2025-04-22
A new study from Tsinghua University and Shanghai Jiao Tong University examines whether reinforcement learning with verifiable rewards (RLVR) helps large language models reason better—or simply make [...]