LongCodeZip: Compress Long Code Context for LLMs

08/10/2025 18 min


Episode Synopsis

The October 2025 paper introduces **LongCodeZip**, a training-free, model-agnostic framework for **compressing long code contexts** to improve the efficiency and capability of code Large Language Models (LLMs). The core problem it addresses is that long code contexts drive up API costs and latency, and the structured nature of code makes it hard for models to identify the relevant information. LongCodeZip uses a two-stage hierarchical approach: **coarse-grained compression** selects the most relevant functions based on conditional perplexity (an approximation of mutual information), then **fine-grained compression** further prunes code within those functions into semantically coherent blocks using perplexity-based chunking and a knapsack optimization that maximizes information density under a token budget. Evaluations across code completion, summarization, and question answering show that LongCodeZip achieves up to a **5.6x compression ratio** while consistently outperforming existing compression and retrieval-augmented generation (RAG) baselines, even when using a smaller compression model.

Source: https://arxiv.org/pdf/2510.00446
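The fine-grained stage described above can be framed as a classic 0/1 knapsack: each candidate block has a token cost and a relevance score, and the goal is to maximize total score within a token budget. Below is a minimal illustrative sketch of that selection step; the block sizes, scores, and the function name `select_blocks` are hypothetical stand-ins, not the paper's actual implementation.

```python
def select_blocks(blocks, token_budget):
    """Toy 0/1 knapsack over code blocks: maximize total relevance
    score within a token budget. `blocks` is a list of
    (token_count, relevance_score) pairs (hypothetical inputs)."""
    n = len(blocks)
    # dp[b] = best achievable score using at most b tokens
    dp = [0.0] * (token_budget + 1)
    # keep[i][b] records whether item i improves the solution at budget b
    keep = [[False] * (token_budget + 1) for _ in range(n)]
    for i, (tokens, score) in enumerate(blocks):
        # Iterate budgets high-to-low so each block is used at most once
        for b in range(token_budget, tokens - 1, -1):
            if dp[b - tokens] + score > dp[b]:
                dp[b] = dp[b - tokens] + score
                keep[i][b] = True
    # Backtrack to recover which blocks were selected
    chosen, b = [], token_budget
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(i)
            b -= blocks[i][0]
    return sorted(chosen), dp[token_budget]

# Example: four blocks, budget of 6 tokens
blocks = [(3, 4.0), (2, 3.0), (4, 5.0), (1, 2.0)]
selected, total = select_blocks(blocks, 6)
# → blocks 0, 1, and 3 fit the budget with total score 9.0
```

In LongCodeZip the scores come from perplexity-based relevance estimates rather than the hand-picked values used here, but the budget-constrained selection has the same shape.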
