DeepPrune: Parallel Scaling without Inter-trace Redundancy
Abstract
DeepPrune is a novel framework that reduces the computational inefficiency of parallel scaling in large language models by using a specialized judge model to dynamically prune redundant reasoning traces.
Parallel scaling has emerged as a powerful paradigm for enhancing the reasoning capabilities of large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy: our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this critical efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method combines a specialized judge model, trained with focal loss and oversampling techniques to accurately predict answer equivalence from partial reasoning traces (achieving 0.87 AUROC on equivalence prediction), with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that DeepPrune reduces token usage by over 80% compared to conventional consensus sampling in most cases, while maintaining competitive accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance reasoning more affordable. Our code and data are available at: https://deepprune.github.io/
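To make the training objective concrete, the sketch below shows a standard binary focal loss of the kind the judge model could be trained with to predict whether two partial traces will converge to the same final answer. The `gamma` and `alpha` values and the input shapes are illustrative assumptions, not details reported in the paper; class imbalance would additionally be handled by oversampling the minority class during training.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Binary focal loss for an answer-equivalence judge (illustrative sketch).

    `logits` are raw scores from a binary classification head over a pair of
    partial reasoning traces; `targets` are 1.0 if the two traces lead to the
    same final answer, else 0.0. `gamma` and `alpha` are the usual focal-loss
    hyperparameters, chosen here for illustration only.
    """
    probs = torch.sigmoid(logits)
    # Probability assigned to the true class for each example.
    p_t = probs * targets + (1 - probs) * (1 - targets)
    # Class-balance weight for positives vs. negatives.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # Down-weight easy examples so training focuses on hard equivalence decisions.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```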
Community
Large language models (LLMs) often generate multiple reasoning traces in parallel to improve answer reliability (e.g., majority voting). However, these traces frequently exhibit severe inter-trace redundancy, leading to wasted computation and inflated inference costs.
DeepPrune addresses this by learning to identify and prune semantically redundant traces before full execution, enabling cost-effective parallel reasoning while preserving performance.
More details can be found on our project website: https://deepprune.github.io/
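For intuition, here is a minimal sketch of the kind of online greedy clustering the pruning step describes: each incoming partial trace is compared against one representative per existing cluster and is kept only if the judge predicts it is not answer-equivalent to any of them. The `judge_equivalent` callable and the 0.5 threshold are placeholders standing in for the trained judge model, not the released implementation.

```python
from typing import Callable, List

def prune_traces(partial_traces: List[str],
                 judge_equivalent: Callable[[str, str], float],
                 threshold: float = 0.5) -> List[int]:
    """Greedy online clustering over partial reasoning traces (illustrative sketch).

    `judge_equivalent(a, b)` returns the predicted probability that partial
    traces `a` and `b` lead to the same final answer. Returns the indices of
    the traces kept as cluster representatives; the remaining traces are
    predicted to be redundant and can be pruned before full generation.
    """
    representatives: List[int] = []
    for i, trace in enumerate(partial_traces):
        # Keep this trace only if no existing representative is judged equivalent.
        is_redundant = any(
            judge_equivalent(partial_traces[rep], trace) >= threshold
            for rep in representatives
        )
        if not is_redundant:
            representatives.append(i)
    return representatives
```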
The following papers were recommended by the Semantic Scholar API:
- Deep Think with Confidence (2025)
- Slim-SC: Thought Pruning for Efficient Scaling with Self-Consistency (2025)
- Training Large Language Models To Reason In Parallel With Global Forking Tokens (2025)
- ParaThinker: Native Parallel Thinking as a New Paradigm to Scale LLM Test-time Compute (2025)
- From Long to Lean: Performance-aware and Adaptive Chain-of-Thought Compression via Multi-round Refinement (2025)
- SpecExit: Accelerating Large Reasoning Model via Speculative Exit (2025)
- FastMTP: Accelerating LLM Inference with Enhanced Multi-Token Prediction (2025)