arxiv:2510.00446

LongCodeZip: Compress Long Context for Code Language Models

Published on Oct 1
· Submitted by Yuling on Oct 3
#2 Paper of the day
Abstract

LongCodeZip is a code compression framework for LLMs that uses dual-stage compression to reduce context size without degrading performance, improving efficiency in code intelligence applications.

AI-generated summary

Code generation under long contexts is becoming increasingly critical as Large Language Models (LLMs) are required to reason over extensive information in the codebase. While recent advances enable code LLMs to process long inputs, high API costs and generation latency remain substantial bottlenecks. Existing context pruning techniques, such as LLMLingua, achieve promising results for general text but overlook code-specific structures and dependencies, leading to suboptimal performance in programming tasks. In this paper, we propose LongCodeZip, a novel plug-and-play code compression framework designed specifically for code LLMs. LongCodeZip employs a dual-stage strategy: (1) coarse-grained compression, which identifies and ranks function-level chunks using conditional perplexity with respect to the instruction, retaining only the most relevant functions; and (2) fine-grained compression, which segments retained functions into blocks based on perplexity and selects an optimal subset under an adaptive token budget to maximize relevance. Evaluations across multiple tasks, including code completion, summarization, and question answering, show that LongCodeZip consistently outperforms baseline methods, achieving up to a 5.6x compression ratio without degrading task performance. By effectively reducing context size while preserving essential information, LongCodeZip enables LLMs to better scale to real-world, large-scale code scenarios, advancing the efficiency and capability of code intelligence applications.
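To make the dual-stage strategy concrete, here is a minimal Python sketch of the idea described above. It is not the authors' implementation (see the GitHub repo for that): the perplexity scorer, the naive function chunking, and the greedy budget pass are illustrative assumptions, and the paper's block-level knapsack selection is simplified to a greedy loop.

```python
# Illustrative sketch of coarse-to-fine, perplexity-based code context compression.
# All helpers here (ppl_fn, count_tokens, split_into_functions) are assumptions,
# not the LongCodeZip API.

from dataclasses import dataclass
from typing import Callable, List

# Hypothetical scorer: conditional perplexity of `text` given `condition`.
# In practice this would wrap a causal LM and score the chunk tokens
# conditioned on the instruction.
PerplexityFn = Callable[[str, str], float]


@dataclass
class Chunk:
    text: str
    tokens: int
    ppl: float = 0.0  # conditional perplexity w.r.t. the instruction


def split_into_functions(code: str) -> List[str]:
    """Rough function-level chunking: split on top-level 'def'/'class'.
    A real implementation would use an AST or language-aware parser."""
    chunks, current = [], []
    for line in code.splitlines(keepends=True):
        if line.startswith(("def ", "class ")) and current:
            chunks.append("".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks


def compress(code: str, instruction: str, budget_tokens: int,
             ppl_fn: PerplexityFn, count_tokens: Callable[[str], int]) -> str:
    # Stage 1 (coarse-grained): rank function-level chunks by conditional
    # perplexity w.r.t. the instruction (lower perplexity ~ more relevant).
    functions = [Chunk(t, count_tokens(t)) for t in split_into_functions(code)]
    for c in functions:
        c.ppl = ppl_fn(c.text, instruction)
    functions.sort(key=lambda c: c.ppl)

    # Stage 2 (fine-grained, simplified): greedily keep the most relevant
    # chunks that fit the adaptive token budget. The paper instead segments
    # retained functions into perplexity-based blocks and solves a
    # knapsack-style selection over them.
    kept, used = [], 0
    for c in functions:
        if used + c.tokens <= budget_tokens:
            kept.append(c)
            used += c.tokens
    return "\n".join(c.text for c in kept)
```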

Community

Paper author Paper submitter

How to compress long code context? 📚

Check out LongCodeZip! The paper was just accepted to ASE 2025. 🔥

Code: https://github.com/YerbaPage/LongCodeZip
Paper: https://huggingface.co/papers/2510.00446

Congratulations, boys!!

It ranks function-level chunks by their conditional perplexity relative to the instruction, then allocates the adaptive token budget accordingly.
This is awesome work, guys! Congratulations! 🔥👍


This is promising, will try it!


