2025-12-24
It's refreshing when a leading AI company states the obvious. In a detailed post on hardening ChatGPT Atlas against prompt injection, OpenAI acknowledged what security practitioners have known fo [...]
2025-12-22
Persistent, unrelenting attacks on frontier models eventually make them fail, with the failure patterns varying by model and developer. Red teaming shows that it’s not the sophisticated, complex attacks tha [...]
2025-11-27
VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to chat through the GenAI security challenges organizations of all sizes face. We talked about shado [...]
2025-12-01
One malicious prompt gets blocked while ten get through. That gap defines the difference between passing benchmarks and withstanding real-world attacks — and it's a gap most enterprise [...]
2025-12-23
OpenAI is using automated red teaming to fight prompt injection in ChatGPT Atlas. The company compares the problem to online fraud against humans, a framing that downplays a technical flaw that could [...]
2025-12-04
Model providers want to prove the security and robustness of their models, releasing system cards and conducting red-team exercises with each new release. But it can be difficult for enterprises to pa [...]
2025-10-01
In the race to deploy generative AI for coding, the fastest tools are not the ones winning enterprise deals. A new VentureBeat analysis, combining a comprehensive survey of 86 engineering teams with our own ha [...]
2025-10-08
The same connectivity that made Anthropic's Model Context Protocol (MCP) the fastest-adopted AI integration standard in 2025 has created enterprise cybersecurity's most dangerous blind spot. [...]