2025-11-10
Meta has just released a new multilingual automatic speech recognition (ASR) system supporting 1,600+ languages — dwarfing OpenAI’s open source Whisper model, which supports just 99.
Its architecture also allows developers to extend that support to thousands more languages. Through a feature called zero-shot in-context learning, users can provide a few paired examples of audio and text in a new language at inference time, enabling the model to transcribe additional utterances in that language without any retraining.
In practice, this expands potential coverage to more than 5,400 languages — roughly every spoken language with a known script.
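Conceptually, zero-shot in-context learning here means assembling a handful of (audio, transcript) pairs alongside the target clip and handing them all to the model at inference time, with no weight updates. The sketch below illustrates only that data-assembly step; the `PairedExample` type and `build_context` function are hypothetical names, not part of any released Meta API.

```python
from dataclasses import dataclass

# Hypothetical sketch: few-shot conditioning for an unseen language.
# The ASR model itself is not shown; this only assembles the
# inference-time context described in the article.

@dataclass
class PairedExample:
    audio_path: str   # short clip in the new language
    transcript: str   # its ground-truth transcription

def build_context(examples: list[PairedExample], target_audio: str) -> dict:
    """Bundle few-shot pairs with the target utterance (illustrative only)."""
    return {
        "conditioning": [(ex.audio_path, ex.transcript) for ex in examples],
        "target": target_audio,
    }

ctx = build_context(
    [PairedExample("greeting.wav", "example transcript one"),
     PairedExample("farewell.wav", "example transcript two")],
    "unknown_utterance.wav",
)
```

The key point is that the new language enters the system purely as data in the request, which is what lets coverage extend past the 1,600 trained languages without retraining.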
It’s a shift from static model capabilities to a flexible framework that comm [...]
2025-12-04
Model providers want to prove the security and robustness of their models, releasing system cards and conducting red-team exercises with each new release. But it can be difficult for enterprises to pa [...]
2025-11-10
Meta's Fundamental AI Research (FAIR) team has introduced Omnilingual ASR, an automatic speech recognition system that can transcribe spoken language in more than 1,600 languages. The [...]
2025-12-22
Unrelenting, persistent attacks on frontier models make them fail, with the patterns of failure varying by model and developer. Red teaming shows that it’s not the sophisticated, complex attacks tha [...]
2025-10-31
Some of the most successful creators on Facebook aren't names you'd ever recognize. In fact, many of their pages don't have a face or recognizable persona attached. Instead, they run pa [...]
2025-12-01
One malicious prompt gets blocked, while ten prompts get through. That gap defines the difference between passing benchmarks and withstanding real-world attacks — and it's a gap most enterprise [...]
2025-10-01
In a lot of ways, Meta hasn't changed much with its second-gen Ray-Ban glasses. The latest model has the same design and largely the same specs as the originals, with two important upgrade [...]