MorphKV: Constant-Sized KV Caches for LLM Inference

04/11/2025 13 min


Episode Synopsis

This academic paper, a June 7, 2025 collaboration between UT Austin and the University of British Columbia, introduces **MorphKV**, a novel inference-time technique designed to address the excessive memory consumed by Key-Value (KV) caches in Large Language Models (LLMs) during extended responses. The core problem is that KV cache size grows linearly with sequence length, straining GPU memory; prior methods avoid this by dropping context or applying lossy compression, which sacrifices accuracy. MorphKV instead maintains a **constant-sized KV cache** through a dynamic, **correlation-aware token selection** mechanism that retains the most relevant older tokens based on the attention profiles of recent tokens. Evaluations on long-response tasks, such as content creation and code generation, demonstrate that MorphKV achieves significant memory savings (**up to 52.9%**) while delivering higher accuracy (**up to 18.2%**) than state-of-the-art compression methods like SnapKV and H2O. The research emphasizes the distinction between long-context and long-response tasks, positioning MorphKV as a robust solution particularly for the latter because it manages memory efficiently throughout the decoding phase.

Source: https://arxiv.org/pdf/2503.00979
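
To make the idea concrete, here is a minimal NumPy sketch of correlation-aware KV-cache eviction in the spirit described above: score older tokens by how strongly the most recent tokens attend to them, then keep only the top-scoring ones plus the recent window so the cache size stays constant. The function name `select_kv_indices`, the mean-over-window aggregation, and the cache layout are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def select_kv_indices(attn_recent, window_size, cache_budget):
    """Pick which token positions to keep in a constant-sized KV cache.

    attn_recent: (window_size, seq_len) attention weights that the most
        recent `window_size` tokens assign to every position seen so far.
    cache_budget: total number of KV entries to retain (kept constant).
    Returns the sorted indices of positions to keep.
    """
    seq_len = attn_recent.shape[1]
    if seq_len <= cache_budget:
        return np.arange(seq_len)                       # nothing to evict yet

    recent = np.arange(seq_len - window_size, seq_len)  # always keep the recent window
    older = np.arange(seq_len - window_size)

    # Score each older token by how strongly the recent tokens attend to it
    # (mean over the window here; MorphKV's actual aggregation may differ).
    scores = attn_recent[:, older].mean(axis=0)

    n_keep_older = cache_budget - window_size
    top_older = older[np.argsort(scores)[-n_keep_older:]]
    return np.sort(np.concatenate([top_older, recent]))

# Usage sketch: at each decoding step, pass the latest window's attention map
# and gather the KV cache down to the returned indices, keeping its size fixed.
rng = np.random.default_rng(0)
attn = rng.random((4, 128))
attn /= attn.sum(axis=1, keepdims=True)                 # normalize rows
keep = select_kv_indices(attn, window_size=4, cache_budget=32)
print(len(keep))                                         # 32
```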