2025-06-12

Meta has introduced V-JEPA 2, a 1.2-billion-parameter video model designed to connect intuitive physical understanding with robot control. The system achieves state-of-the-art results on motion recognition and action prediction benchmarks—and can control robots without additional training.
The article notes that Meta’s latest model highlights the challenge AI faces in long-term planning [...]
2025-10-30
Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its [...]
2025-10-31
Some of the most successful creators on Facebook aren't names you'd ever recognize. In fact, many of their pages don't have a face or recognizable persona attached. Instead, they run pa [...]
2025-10-01
In a lot of ways, Meta hasn't changed much with its second-gen Ray-Ban glasses. The latest model has the same design and largely the same specs as the originals, with two important upgrades [...]
2025-10-27
Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that [...]
2025-09-18
At Meta Connect 2025's kickoff event, Mark Zuckerberg unveiled a trio of new smart glasses, including the company's first model with augmented reality. Meta's boss also announced the second generation [...]
2025-10-08
The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoe [...]