Abstract
Compression of KV caches in LLMs can degrade performance on multi-instruction tasks and, in particular, increase system prompt leakage; improved eviction policies can mitigate these issues.
KV cache compression promises increased throughput and efficiency with negligible loss in performance. While the gains in throughput are indisputable and recent literature has indeed shown minimal degradation on particular benchmarks, the consequences of compression in realistic scenarios such as multi-instruction prompting have been insufficiently studied. In this paper, we identify several pitfalls practitioners should be aware of when deploying KV cache compressed LLMs. Importantly, we show that certain instructions degrade much more rapidly with compression, effectively causing them to be ignored entirely by the LLM. As a practical case study, we highlight system prompt leakage, empirically showing the impact of compression on leakage and on general instruction following. We identify several factors that play a role in prompt leakage: compression method, instruction order, and KV eviction bias. We then propose simple changes to KV cache eviction policies that reduce the impact of these factors and improve overall performance on multi-instruction tasks.
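To make the eviction-policy idea concrete, below is a minimal sketch of score-based KV cache eviction (in the style of methods such as H2O/SnapKV) with an optional bias that protects instruction-span tokens from being dropped. This is an illustrative assumption, not the paper's exact method; the function name `evict_kv`, the `protect_mask` parameter, and the biasing scheme are all hypothetical.

```python
# Minimal sketch (illustrative, not the paper's method): score-based KV cache
# eviction with an optional bias protecting instruction tokens from eviction.
import torch


def evict_kv(keys, values, attn_scores, budget, protect_mask=None):
    """Keep only `budget` tokens per head, ranked by accumulated attention.

    keys, values: [num_heads, seq_len, head_dim]
    attn_scores:  [num_heads, seq_len] accumulated attention each cached token
                  has received from recent queries
    protect_mask: optional [seq_len] bool tensor marking tokens (e.g. system
                  prompt / instruction spans) to bias toward retention
    """
    scores = attn_scores.clone()
    if protect_mask is not None:
        # Bias instruction tokens so they are evicted last; unbiased eviction
        # can otherwise drop some instructions entirely.
        scores[:, protect_mask] += scores.max()
    keep = scores.topk(budget, dim=-1).indices.sort(dim=-1).values  # keep order
    keep_exp = keep.unsqueeze(-1).expand(-1, -1, keys.size(-1))
    return keys.gather(1, keep_exp), values.gather(1, keep_exp)


# Toy usage: 2 heads, 16 cached tokens, keep 8; protect the first 4 (system prompt)
H, T, D, B = 2, 16, 64, 8
k, v = torch.randn(H, T, D), torch.randn(H, T, D)
scores = torch.rand(H, T)
mask = torch.zeros(T, dtype=torch.bool)
mask[:4] = True
k_small, v_small = evict_kv(k, v, scores, B, protect_mask=mask)
print(k_small.shape)  # torch.Size([2, 8, 64])
```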
Community
The Pitfalls of KV Cache Compression
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- KVCompose: Efficient Structured KV Cache Compression with Composite Tokens (2025)
- Expected Attention: KV Cache Compression by Estimating Attention from Future Queries Distribution (2025)
- PagedEviction: Structured Block-wise KV Cache Pruning for Efficient Large Language Model Inference (2025)
- Adaptive KV-Cache Compression without Manually Setting Budget (2025)
- LAVa: Layer-wise KV Cache Eviction with Dynamic Budget Allocation (2025)
- Value-Guided KV Compression for LLMs via Approximated CUR Decomposition (2025)
- CommonKV: Compressing KV Cache with Cross-layer Parameter Sharing (2025)