The KV Cache: The Hidden Bottleneck of Long-Context LLMs

Context windows keep getting bigger, but the real constraint on long-context performance isn't the model — it's the memory footprint of the KV cache. Here's what that means for anyone running LLMs at scale.

[Image: a KV cache growing in GPU memory as the context window increases, with compression layers applied]

The thing most people don't think about

When a model vendor advertises a million-token context window, most developers assume the bottleneck is somewhere in the attention math — and that throwing more compute at the problem will make it go away. The reality is stranger. The thing that actually makes long contexts painful to run isn't the attention computation. It's a data structure called the KV cache, and it scales in ways that catch most teams off guard.

Google's TurboQuant paper at ICLR 2026 put the problem back in the spotlight by showing dramatic memory savings from aggressive KV cache compression. But the underlying issue has been quietly shaping inference economics for years.

What the KV cache actually is

Every transformer layer computes two tensors for each token it processes: a key and a value. These get used by the attention mechanism to let later tokens "look back" at earlier ones. To avoid recomputing keys and values for the same tokens on every decoding step, they're cached in GPU memory — hence KV cache.
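A toy decoding loop makes the mechanism concrete. The shapes and names here are illustrative; a real model keeps one cache per layer and per attention head:

```python
import numpy as np

def attend(q, K, V):
    """Single-query attention over all cached keys and values."""
    scores = K @ q / np.sqrt(q.shape[-1])   # one score per cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax
    return weights @ V                      # weighted sum of cached values

head_dim = 4
K_cache = np.empty((0, head_dim))           # grows one row per token
V_cache = np.empty((0, head_dim))

rng = np.random.default_rng(0)
for step in range(3):                       # decode three tokens
    k, v, q = rng.normal(size=(3, head_dim))
    # Compute this token's key and value once, append them, never recompute.
    K_cache = np.vstack([K_cache, k])
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)       # new token attends over the cache

print(K_cache.shape)  # (3, 4): one cached (key, value) row per decoded token
```

The cache trades memory for compute: without it, step N would recompute keys and values for all N previous tokens.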

The cache grows linearly with the sequence length. For a typical frontier model, each cached token takes on the order of hundreds of kilobytes. Multiply that by a million tokens and you're looking at hundreds of gigabytes of GPU memory per in-flight request, just for the cache.
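The arithmetic is easy to sketch. The numbers below assume a Llama-70B-like shape (80 layers, 8 grouped-query KV heads, head dimension 128, fp16); they're illustrative, not any specific vendor's published footprint:

```python
# Per-token KV cache bytes = 2 tensors (K and V) * layers * kv_heads
#                            * head_dim * bytes_per_element.
def kv_bytes_per_token(layers, kv_heads, head_dim, dtype_bytes=2):
    return 2 * layers * kv_heads * head_dim * dtype_bytes

# Illustrative 70B-class shape with grouped-query attention, fp16 cache.
per_token = kv_bytes_per_token(layers=80, kv_heads=8, head_dim=128)

print(per_token // 1024)                       # 320 KiB per cached token
print(per_token * 1_000_000 / 1024**3)         # ~305 GiB for a million tokens
```

Grouped-query attention already shrinks this (8 KV heads instead of 64); without it the same model would cache eight times as much.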

This is why you can't simply scale context windows by throwing more tokens at the model. The memory footprint of the cache becomes the hard constraint long before anything else does.

The production consequences

Once you understand the KV cache, several things that looked weird start making sense.

The reason long-context inference is expensive isn't that the model is doing more thinking. It's that each request holds a much bigger chunk of GPU memory hostage for the whole session, so fewer requests fit on each GPU and per-token prices rise accordingly.

How the cache gets smaller

The last eighteen months of inference research have been, to a surprising degree, about shrinking the KV cache without degrading model quality. The main approaches:

Quantization

Storing the K and V tensors in 8-bit or 4-bit precision instead of 16-bit cuts cache memory in half or to a quarter. Most production inference stacks now do some form of this automatically. The quality impact is usually negligible at 8-bit and noticeable but tolerable at 4-bit on most tasks.
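A minimal sketch of the idea, using a single per-tensor scale (production stacks typically use finer-grained per-channel or per-group scales):

```python
import numpy as np

def quantize_int8(x):
    """Map a float tensor to int8 plus one scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.normal(size=(16, 128)).astype(np.float32)   # a cached K block
q, scale = quantize_int8(kv)
recovered = dequantize(q, scale)

print(q.nbytes, kv.astype(np.float16).nbytes)        # 2048 vs 4096 bytes
print(float(np.abs(kv - recovered).max()))           # small reconstruction error
```

The int8 version is half the size of the fp16 cache it replaces, and the worst-case error is bounded by half the scale factor.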

Eviction and pruning

Not every token in the context is equally important. Techniques like H2O, SnapKV, and StreamingLLM identify which cached tokens contribute meaningfully to future attention and discard the rest. For very long contexts, you can often throw away 70–90% of the cache with minimal quality loss.
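A heavily simplified sketch of the heavy-hitter idea behind H2O: score each cached token by its accumulated attention mass and keep the top scorers plus a recent window. The function name and thresholds here are illustrative, not the paper's:

```python
import numpy as np

def evict(K, V, attn_mass, keep_top=4, keep_recent=2):
    """Keep heavy-hitter tokens plus the most recent ones; drop the rest."""
    n = K.shape[0]
    recent = set(range(n - keep_recent, n))
    heavy = set(np.argsort(attn_mass)[::-1][:keep_top].tolist())
    keep = sorted(recent | heavy)              # indices to retain, in order
    return K[keep], V[keep], attn_mass[keep]

rng = np.random.default_rng(0)
K = rng.normal(size=(10, 8))
V = rng.normal(size=(10, 8))
mass = rng.random(10)                          # accumulated attention per token

K2, V2, _ = evict(K, V, mass)
print(K2.shape[0])                             # at most keep_top + keep_recent rows
```

Real implementations update the attention-mass statistics online during decoding and evict continuously, rather than in one batch pass.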

Compression

The newest wave of techniques — TurboQuant being a recent example — applies vector rotation and dimensionality reduction to the cached tensors, compressing them further than naive quantization allows. These methods push the limits of what's possible without modifying the model itself.

Cache sharing

When multiple requests share a common prefix (think: a long system prompt reused across users), the KV cache for that prefix can be computed once and shared. This is what "prompt caching" features from major providers are actually doing under the hood. The savings are substantial for workloads with stable preambles.
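A minimal sketch of the lookup, keyed on a hash of the prefix. This is illustrative: production servers share cache blocks inside the attention layers rather than opaque blobs, and match on token-level prefixes:

```python
import hashlib

class PrefixCache:
    """Reuse the expensive prefill result for identical prompt prefixes."""

    def __init__(self):
        self.store = {}
        self.hits = 0

    def get_or_compute(self, prefix, compute_kv):
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self.store:
            self.hits += 1                     # prefill skipped entirely
        else:
            self.store[key] = compute_kv(prefix)  # expensive prefill, once
        return self.store[key]

cache = PrefixCache()
system = "You are a helpful assistant. " * 100    # long shared preamble
for _ in range(5):                                # five requests, same prefix
    kv = cache.get_or_compute(system, lambda p: f"<kv for {len(p)} chars>")

print(cache.hits)  # 4: the prefill ran once, four requests reused it
```

The savings scale with how much of each request is a stable preamble versus per-user content, which is exactly why long shared system prompts are the best-case workload.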

The free optimization most teams miss

If you're using a provider that supports prompt caching and you have a long system prompt or set of instructions that rarely changes, turning on caching can cut your long-context costs by 50% or more. It's usually one configuration flag. Turn it on.

What this means for architecture decisions

Once you internalize that the KV cache is the real constraint, some architecture decisions look different. Stable, shared prompt prefixes become far cheaper than per-user context that can't be cached; concurrency planning should budget cache memory per in-flight session, not just model weights; and a bigger advertised context window doesn't mean filling it is affordable.

Where this goes

The trajectory is clear: KV cache compression is getting aggressive enough that effective context windows can keep growing without inference costs growing proportionally. That decoupling is one of the quieter but more important shifts in the industry right now. It's what's going to make things like persistent agent memory, truly long document analysis, and real-time multi-turn reasoning economically viable — not better attention mechanisms, but better cache engineering.

The next time you see a model announcement with a bigger context window, the interesting question isn't "can it actually use all that context?" It's "what are they doing to the KV cache to make it affordable?"