2025-10-03
Huawei’s Computing Systems Lab in Zurich has introduced a new open-source quantization method for large language models (LLMs) aimed at reducing memory demands without sacrificing output quality.
The technique, called SINQ (Sinkhorn-Normalized Quantization), is designed to be fast, calibration-free, and easy to integrate into existing model workflows. The Huawei research team has made the code available on GitHub and Hugging Face under a permissive, enterprise-friendly Apache 2.0 license, allowing organizations to take and use it, modify it, and deploy it commercially — all f [...]
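The article does not detail the algorithm itself, but the name suggests a Sinkhorn-style normalization step before quantization. Below is a minimal, hypothetical sketch of that idea: alternately rescale a weight matrix's rows and columns toward unit standard deviation (a Sinkhorn-Knopp-like iteration), quantize the normalized matrix to low bit-width, and fold the scales back in at dequantization time. The function names, the iteration count, and the exact normalization target are assumptions for illustration; the actual SINQ method may differ.

```python
import numpy as np

def sinkhorn_normalize(W, iters=10, eps=1e-8):
    """Alternately rescale rows and columns so each has roughly unit std.
    Hypothetical sketch of a Sinkhorn-style dual-axis normalization."""
    r = np.ones(W.shape[0])  # per-row scales
    c = np.ones(W.shape[1])  # per-column scales
    for _ in range(iters):
        Wn = W / np.outer(r, c)
        r *= Wn.std(axis=1) + eps   # absorb remaining row spread into r
        Wn = W / np.outer(r, c)
        c *= Wn.std(axis=0) + eps   # absorb remaining column spread into c
    return W / np.outer(r, c), r, c

def quantize(Wn, bits=4):
    """Uniform symmetric quantization of the normalized matrix."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(Wn).max() / qmax
    q = np.clip(np.round(Wn / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Demo: a matrix with highly uneven row magnitudes (hard for naive quantization).
np.random.seed(0)
W = np.random.randn(64, 64) * np.random.rand(64, 1) * 5
Wn, r, c = sinkhorn_normalize(W)
q, s = quantize(Wn)
W_hat = (q * s) * np.outer(r, c)  # dequantize: undo the row/column scaling
err = np.abs(W - W_hat).mean()
```

Because the dual-axis scales soak up per-row and per-column outliers before rounding, the mean reconstruction error stays well below the mean magnitude of the weights, even at 4 bits.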
2025-07-02
A US judge has ruled that Huawei must stand trial following a 16-count indictment from 2019 accusing the Chinese telecommunications company of trying to steal trade secrets from its US rivals and sell [...]
2025-10-02
IBM today announced the release of Granite 4.0, the newest generation of its in-house family of open-source large language models (LLMs) designed to balance high performance with lower memory and cost [...]
2025-03-20
Because of sanctions that will prevent Huawei’s latest foldable from going on sale in the US, many folks who are interested in the handset will never lay eyes on it in person. Still, you might want [...]
2025-03-14
Several people have been arrested as part of a corruption investigation linked to the European Parliament and Huawei. The company is suspected of bribing European Union officials, according to the Ass [...]
2025-02-28
The keyword for the iPhone 16e seems to be "compromise." In this episode, Devindra chats with Cherlynn about her iPhone 16e review, and the two try to figure out who this phone is actually for. Also, [...]