Fast KV Compaction via Attention Matching

Published: February 19, 2026 at 11:46 PM EST

Source: Hacker News

Abstract

Scaling language models to long contexts is often bottlenecked by the size of the key-value (KV) cache. In deployed settings, long contexts are typically managed through compaction in token space via summarization. However, summarization can be highly lossy, substantially harming downstream performance. Recent work on Cartridges has shown that it is possible to train highly compact KV caches in latent space that closely match full‑context performance, but at the cost of slow and expensive end‑to‑end optimization. This work describes an approach for fast context compaction in latent space through Attention Matching, which constructs compact keys and values to reproduce attention outputs and preserve attention mass at a per‑KV‑head level. We show that this formulation naturally decomposes into simple subproblems, some of which admit efficient closed‑form solutions. Within this framework, we develop a family of methods that significantly push the Pareto frontier of compaction time versus quality, achieving up to 50× compaction in seconds on some datasets with little quality loss.
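The paper's exact formulation is not reproduced here, but the core idea — choose compact keys and values so that attention against the small cache reproduces the attention outputs of the full cache — can be illustrated with a toy sketch. In this hypothetical setup, the compact keys are simply chunk means of the full keys (a stand-in for whatever the method actually uses); once the compact keys are fixed, fitting the compact values to match the full-cache outputs becomes an ordinary least-squares problem with a closed-form solution, mirroring the abstract's claim that some subproblems admit efficient closed-form solutions. All variable names and choices below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 64, 8, 16  # full context length, compact cache length, head dim

Q = rng.normal(size=(n, d))  # queries
K = rng.normal(size=(n, d))  # full keys
V = rng.normal(size=(n, d))  # full values

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Attention outputs of the full KV cache: the targets to reproduce.
O = softmax(Q @ K.T / np.sqrt(d)) @ V

# Toy compact keys: means of contiguous chunks of the full keys
# (an arbitrary placeholder for a real key-construction step).
Kc = K.reshape(m, n // m, d).mean(axis=1)

# Attention weights of the same queries against the compact keys.
A = softmax(Q @ Kc.T / np.sqrt(d))  # shape (n, m)

# Closed-form compact values: least-squares fit of A @ Vc to O.
Vc, *_ = np.linalg.lstsq(A, O, rcond=None)

# Baseline compact values with no fitting: chunk means of V.
Vb = V.reshape(m, n // m, d).mean(axis=1)

err_fit = np.linalg.norm(A @ Vc - O)
err_base = np.linalg.norm(A @ Vb - O)
```

Because `Vc` is the global minimizer of the Frobenius-norm objective for the fixed weights `A`, `err_fit` can never exceed `err_base`; a real method would additionally optimize the compact keys and account for attention mass per KV head, which this sketch omits.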

Subjects

  • Machine Learning (cs.LG)

Submission history

  • v1 – Wed, 18 Feb 2026 09:06:53 UTC (349 KB) – submitted by Adam Zweiger (view email)