2025-02-24

After investing more than six months of development time and a year of GPU compute time, Hugging Face has published a free, open-source manual that provides detailed instructions for efficiently training large AI models.
The article Hugging Face explains how to train large AI models in the "Ultra-Scale Playbook" appeared first on THE DECODER. [...]
2025-11-17
AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]
2025-12-02
For much of 2025, the frontier of open-weight language models has been defined not in Silicon Valley or New York City, but in Beijing and Hangzhou. Chinese research labs including Alibaba's Qwen, [...]
2025-10-28
In an industry where model size is often seen as a proxy for intelligence, IBM is charting a different course — one that values efficiency over enormity, and accessibility over abstraction. The 114-y [...]
2025-10-02
IBM today announced the release of Granite 4.0, the newest generation of its homegrown family of open-source large language models (LLMs) designed to balance high performance with lower memory and cost [...]
2025-10-17
Hugging Face has launched HuggingChat Omni, an AI router that selects the best open source model for each user prompt from more than 100 available models. The article Hugging Face launches [...]
2025-10-03
Huawei’s Computing Systems Lab in Zurich has introduced a new open-source quantization method for large language models (LLMs) aimed at reducing memory demands without sacrificing output quality. Th [...]
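For context: in its simplest form, post-training quantization stores model weights as low-bit integers plus a scale factor instead of 32-bit floats, which is where the memory savings come from. The sketch below shows generic symmetric round-to-nearest int8 quantization; it illustrates the general technique only, not the specific method Huawei describes (whose details are cut off above), and the function names are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric round-to-nearest quantization: int8 weights plus one fp32 scale.
    A generic illustration, not Huawei's method."""
    scale = float(np.abs(w).max()) / 127.0              # map the largest |w| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A 4096x4096 fp32 weight matrix shrinks from 64 MiB to 16 MiB as int8.
w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.abs(w - dequantize(q, s)).mean())
print(f"{w.nbytes / 2**20:.0f} MiB -> {q.nbytes / 2**20:.0f} MiB, mean abs error {err:.4f}")
```

The research interest in methods like the one reported here is precisely in closing the gap this naive scheme leaves: keeping the 4x (or greater) memory reduction while minimizing the reconstruction error that degrades output quality.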