venturebeat
The team behind continuous batching says your idle GPUs should be running inference, not sitting dark

Every GPU cluster has dead time. Training jobs finish, workloads shift, and hardware sits dark while power and cooling costs keep running. For neocloud operators, those empty cycles are lost margin.

The obvious workaround is spot GPU markets: renting spare capacity to whoever needs it. But spot instances mean the cloud vendor is still the one doing the renting, and engineers buying that capacity are still paying for raw compute with no inference stack attached. FriendliAI's answer is different: run inference directly on the unused hardware, optimize for token throughput, and split the revenue with the operator.

FriendliAI was founded by Byung-Gon Chun, the researcher whose paper on continuous batching became foundational to vLLM, the open source inference engine used across most prod [...]
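
To make the throughput argument concrete, here is a minimal Python sketch of continuous (step-level) batching, the scheduling idea Chun's research contributed to vLLM. It illustrates the technique only; it is not FriendliAI's or vLLM's actual code, and `Request`, `model_step`, and `max_batch` are hypothetical names.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        prompt: str
        max_new_tokens: int
        generated: list = field(default_factory=list)

    def model_step(batch):
        # Stand-in for one forward pass emitting one token per request.
        return ["tok"] * len(batch)

    def serve(incoming, max_batch=8):
        # Continuous batching schedules at token granularity: finished
        # requests leave and queued requests join between decode steps,
        # so batch slots never idle while a long sequence drains
        # (unlike static batching, which waits for the whole batch).
        waiting, running = deque(incoming), []
        while waiting or running:
            while waiting and len(running) < max_batch:
                running.append(waiting.popleft())  # refill free slots
            for req, tok in zip(running, model_step(running)):
                req.generated.append(tok)
            # Retire finished requests immediately, not at batch boundaries.
            running = [r for r in running if len(r.generated) < r.max_new_tokens]

That slot utilization is what translates into tokens per second, and therefore revenue, on otherwise idle hardware.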

We have found similar tools and related coverage. Check out the suggestions below.

venturebeat
Baseten takes on hyperscalers with new AI training platform that lets you own your model weights

Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean [...]

Match Score: 145.00

venturebeat
Together AI's ATLAS adaptive speculator delivers 400% inference speedup by learning from workloads in real-time

Enterprises expanding AI deployments are hitting an invisible performance wall. The culprit? Static speculators that can't keep up with shifting workloads. Speculators are smaller AI models that w [...]

Match Score: 111.13
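
For readers unfamiliar with the mechanism the teaser above refers to, here is a minimal greedy-decoding sketch of how a speculator (draft model) accelerates a larger target model. It shows only the static base technique, not ATLAS's adaptive layer; `draft_next` and `target_next` are hypothetical callables that return the next token for a token sequence.

    def speculative_decode(draft_next, target_next, prompt, k=4, max_new=32):
        # The cheap draft model proposes k tokens; the large target model
        # verifies them and keeps the agreeing prefix. Every accepted
        # token skips a full target-model decode step.
        out = list(prompt)
        while len(out) - len(prompt) < max_new:
            proposal = []
            for _ in range(k):  # draft phase: k cheap autoregressive steps
                proposal.append(draft_next(out + proposal))
            accepted = []
            for tok in proposal:
                # Verify phase: a real engine does this in ONE batched
                # target pass; we loop here for clarity.
                expected = target_next(out + accepted)
                if tok != expected:
                    accepted.append(expected)  # keep the target's correction
                    break
                accepted.append(tok)
            else:
                # All k drafts accepted; the target yields one bonus token.
                accepted.append(target_next(out + accepted))
            out.extend(accepted)
        return out

The speedup depends entirely on how often the draft's proposals are accepted, which is exactly why a static speculator degrades as workloads drift and an adaptive one pays off.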

venturebeat
Inference is splitting in two — Nvidia’s $20B Groq bet explains its next act

Nvidia’s $20 billion strategic licensing deal with Groq represents one of the first clear moves in a four-front fight over the future AI stack. 2026 is when that fight becomes obvious to enterprise [...]

Match Score: 105.67

venturebeat
AI inference costs dropped up to 10x on Nvidia's Blackwell — but hardware is only half the equation

Lowering the cost of inference typically takes a combination of hardware and software. A new analysis released Thursday by Nvidia details how four leading inference providers are reporting 4x to 10x redu [...]

Match Score: 100.12

Destination
How to buy a GPU in 2025

One of the trickiest parts of any new computer build or upgrade is finding the right video card. In a gaming PC, the GPU is easily the most important component, and you can hamstring your experience b [...]

Match Score: 83.19

venturebeat
Open source Mamba 3 arrives to surpass Transformer architecture with nearly 4% improved language modeling, reduced latency

The generative AI era began for most people with the launch of OpenAI's ChatGPT in late 2022, but the underlying technology — the "Transformer" neural network architecture that allows [...]

Match Score: 79.86

venturebeat
Train-to-Test scaling explained: How to optimize your end-to-end AI compute budget for inference

The standard guidelines for building large language models (LLMs) optimize only for training costs and ignore inference costs. This poses a challenge for real-world applications that use inference-tim [...]

Match Score: 76.29
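
The tradeoff this teaser describes can be made concrete with the standard back-of-the-envelope approximations (training compute of roughly 6·N·D FLOPs, inference of roughly 2·N FLOPs per generated token). The model size and token counts below are illustrative assumptions, not figures from the article.

    def lifetime_flops(n_params, train_tokens, served_tokens):
        # Rough standard approximations: 6*N*D FLOPs for training,
        # 2*N FLOPs per token generated at inference time.
        return 6 * n_params * train_tokens, 2 * n_params * served_tokens

    # Illustrative: a 7B-parameter model trained on 2T tokens that then
    # serves 10T tokens over its deployed lifetime.
    train, serve = lifetime_flops(7e9, 2e12, 1e13)
    print(f"training:  {train:.2e} FLOPs")
    print(f"inference: {serve:.2e} FLOPs ({serve / train:.1f}x training)")

In this example inference consumes more lifetime compute than training did, which is the kind of end-to-end budget the article argues training-only guidelines ignore.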

venturebeat
OpenAI deploys Cerebras chips for 15x faster code generation in first major move beyond Nvidia

OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company's first significant inference partnership outsi [...]

Match Score: 75.55

venturebeat
Breaking through AI’s memory wall with token warehousing

As agentic AI moves from experiments to real production workloads, a quiet but serious infrastructure problem is coming into focus: memory. Not compute. Not models. Memory. Under the hood, today’s GP [...]

Match Score: 73.35
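
The "memory, not compute" point is easiest to see in KV-cache arithmetic. Below is a rough per-sequence sizing helper, assuming standard multi-head attention with fp16 caches; the 7B-class model shape is an illustrative assumption, not a figure from the article.

    def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_val=2):
        # Keys and values (factor of 2) for every layer, head, and cached
        # token, at fp16 (2 bytes per value) by default.
        return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_val

    # Illustrative 7B-class shape: 32 layers, 32 KV heads, head_dim 128.
    per_seq = kv_cache_bytes(32, 32, 128, n_tokens=4096)
    print(f"{per_seq / 2**30:.2f} GiB per 4k-token sequence")  # -> 2.00 GiB

At roughly 2 GiB per 4k-token sequence, an 80 GB accelerator holds cache for only a few dozen concurrent long-context requests before compute sits idle behind memory, which is the wall token warehousing aims to break through.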