2025-09-30
eSelf, a startup developing interactive, photorealistic talking AI video avatars, has introduced a new feature called Share Screen Analysis that allows its avatars to view and respond to what users display on their screens.
The feature is powered by a large language model (LLM) from any of several third-party AI providers, such as OpenAI and Google, combined with a custom-trained video language model built in-house by eSelf. It is designed as an "out-of-the-box" AI solution for enterprises looking to provide customer or employee IT support, guidance, skills development and upskilling, tutorials for new products and features, education, and other interactive business use cases.
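The article doesn't go into implementation detail, but the general pattern it describes, pairing a captured screen frame with a user's question and routing both to a vision-capable third-party LLM, can be sketched roughly as follows. The model name, prompt, and frame-capture step here are illustrative assumptions, not eSelf's actual pipeline:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def analyze_screen_frame(png_path: str, question: str) -> str:
    """Send one captured screen frame plus a user question to a vision-capable LLM.

    Purely illustrative: eSelf has not published its pipeline; this only shows
    the generic pattern of grounding an assistant's answer in a screenshot.
    """
    with open(png_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable third-party model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{frame_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content or ""

# Example: guiding a user through an unfamiliar settings page
# print(analyze_screen_frame("frame.png", "What should I click to enable screen sharing?"))
```

In eSelf's described setup, a call like this would presumably sit alongside its in-house video language model rather than stand alone.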
In a video interview with VentureBeat, CEO Alan [...]
2025-01-16
Around ten years ago, as the price of cable rose to untenable heights, live TV streaming services arrived as the low-cost, contract-free antidote. The services are still blissfully easy to walk away f [...]
2025-10-01
In the race to deploy generative AI for coding, the fastest tools are not winning enterprise deals. A new VentureBeat analysis, combining a comprehensive survey of 86 engineering teams with our own ha [...]
2025-12-24
It's refreshing when a leading AI company states the obvious. In a detailed post on hardening ChatGPT Atlas against prompt injection, OpenAI acknowledged what security practitioners have known fo [...]
2025-11-30
Hybrid cloud security was built before the current era of automated, machine-based cyberattacks that take just milliseconds to execute and minutes to deliver devastating impacts to infrastructure. The [...]
2025-10-27
Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that [...]