OpenAI has launched a new web page called the safety evaluations hub to publicly share information such as the hallucination rates of its models. The hub will also highlight whether a model produces harmful content, how well it follows instructions, and how it holds up against attempted jailbreaks.
The tech company claims this new page will provide additional transparency into OpenAI, a company that, for context, has faced multiple lawsuits alleging it illegally used copyrighted material to train its AI models. Oh, yeah, and it's worth mentioning that The New York Times claims the tech company accidentally deleted evidence in the newspaper's plagiarism case against it.
The safety evaluations hub is meant to expand on OpenAI's system cards. They only outline a devel [...]
OpenAI's GPT-5.3 Instant — the company's most widely used model — reduces hallucinations by up to 26.8% compared to its predecessor, prioritizing accuracy and conversational reliability [...]
OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company's first significant inference partnership outsi [...]
The AI updates aren't slowing down. Literally two days after OpenAI launched a new underlying AI model for ChatGPT called GPT-5.3 Instant, the company has unveiled another, even more massive upgr [...]
OpenAI on Monday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that [...]
OpenAI on Wednesday released GPT-5.3-Codex, which the company calls its most capable coding agent to date, in an announcement timed to land at the exact same moment Anthropic unveiled its own flagship [...]
OpenAI today announced the release of Sora 2, its latest video generation model, which now includes AI-generated audio matching the generated video as well. It is paired with the launch of a new iOS a [...]
OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a si [...]
Large language models (LLMs) have astounded the world with their capabilities, yet they remain plagued by unpredictability and hallucinations, confidently outputting incorrect information. In high- [...]