For more than a decade, Nvidia’s GPUs have underpinned nearly every major advance in modern AI. That position is now being challenged. Frontier models such as Google’s Gemini 3 and Anthropic’s Claude 4.5 Opus were trained not on Nvidia hardware but on Google’s latest Tensor Processing Units, the Ironwood-based TPUv7. This signals that a viable alternative to the GPU-centric AI stack has already arrived, one with real implications for the economics and architecture of frontier-scale training.

Nvidia’s CUDA (Compute Unified Device Architecture), the platform that exposes the GPU’s massively parallel architecture to software, and its surrounding ecosystem of tools have created what many have dubbed the "CUDA moat": once a team has built its pipelines on CUDA, switching to another platform means rewriting and revalidating much of that software stack.