Destination

2025-10-09

Researchers find just 250 malicious documents can leave LLMs vulnerable to backdoors

Artificial intelligence companies have been working at breakneck speeds to develop the best and most powerful tools, but that rapid development hasn't always been coupled with clear understandings of AI's limitations or weaknesses. Today, Anthropic released a report on how attackers can influence the development of a large language model.

Discover Copy

Rating

Innovation

Pricing

Technology

Usability

We have discovered similar tools to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat

2025-11-04

98% of market researchers use AI daily, but 4 in 10 say it makes errors — revealing a major trust problem

Market researchers have embraced artificial intelligence at a staggering pace, with 98% of professionals now incorporating AI tools into their work and 72% using them daily or more frequently, accordi [...]

Match Score: 100.15

Destination

2025-10-10

The Morning After: Our verdict on the Pixel 10 Pro Fold

A little after the launch of the rest of the Pixel 10 family, Google’s new foldable is here. The Pixel 10 Pro Fold is a beast — which may not be the first thing you want to hear about a foldable. [...]

Match Score: 78.51

Destination

2025-02-28

Engadget Podcast: iPhone 16e review and Amazon's AI-powered Alexa+

The keyword for the iPhone 16e seems to be "compromise." In this episode, Devindra chats with Cherlynn about her iPhone 16e review and try to figure out who this phone is actually for. Also, [...]

Match Score: 74.34

venturebeat

2025-10-08

MCP stacks have a 92% exploit probability: How 10 plugins became enterprise security's biggest blind spot

The same connectivity that made Anthropic's Model Context Protocol (MCP) the fastest-adopted AI integration standard in 2025 has created enterprise cybersecurity's most dangerous blind spot. [...]

Match Score: 66.04

Destination

2025-04-28

Researchers secretly experimented on Reddit users with AI-generated comments

A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit’s most popular communities using AI-generated comments to test the persuasiveness of large lang [...]

Match Score: 56.92

venturebeat

2025-10-16

ACE prevents context collapse with ‘evolving playbooks’ for self-improving AI agents

A new framework from Stanford University and SambaNova addresses a critical challenge in building robust AI agents: context engineering. Called Agentic Context Engineering (ACE), the framework automat [...]

Match Score: 46.14

venturebeat

2025-10-30

Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its [...]

Match Score: 45.85

venturebeat

2025-10-20

New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning

Researchers at Mila have proposed a new technique that makes large language models (LLMs) vastly more efficient when performing complex reasoning. Called Markovian Thinking, the approach allows LLMs t [...]

Match Score: 44.83

venturebeat

2025-09-30

Meta’s new CWM model learns how code works, not just what it looks like

Meta’s AI research team has released a new large language model (LLM) for coding that enhances code understanding by learning not only what code looks like, but also what it does when executed. The [...]

Match Score: 44.30