Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning
Abstract
CodeEraser effectively and efficiently removes sensitive memorized information from Code Language Models using machine unlearning techniques without full retraining.
While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches, including training data de-duplication and differential privacy augmentation, have been proposed. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning, a post-hoc modification method that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorized samples as unlearning targets. We study two widely used gradient ascent-based unlearning approaches: the vanilla and constraint-based methods, and introduce CodeEraser, an advanced variant that selectively unlearns sensitive memorized segments in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
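To make the segment-level unlearning idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a Hugging Face causal LM and a boolean `sensitive_mask` marking the tokens of the memorized secret, applies gradient ascent on that span, and ordinary descent on the surrounding code. The model name, hyperparameters, and function names are illustrative placeholders.

```python
# Minimal sketch of segment-level gradient-ascent unlearning (illustrative only).
# `sensitive_mask` is a 0/1 tensor marking the secret's token positions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codeparrot/codeparrot-small"  # any causal CLM could stand in here
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # hypothetical hyperparameters

def selective_unlearning_step(input_ids, sensitive_mask, retain_weight=1.0):
    """Ascend the loss on sensitive tokens, descend on the surrounding code tokens."""
    logits = model(input_ids).logits[:, :-1, :]   # predictions for tokens 1..T-1
    targets = input_ids[:, 1:]
    span = sensitive_mask[:, 1:].float()          # align mask with shifted targets

    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    ).view(targets.shape)

    forget_loss = (token_loss * span).sum() / span.sum().clamp(min=1)
    retain_loss = (token_loss * (1 - span)).sum() / (1 - span).sum().clamp(min=1)

    # Negate the forget term (gradient ascent on the secret) and keep descent on the rest.
    (-forget_loss + retain_weight * retain_loss).backward()
    optimizer.step()
    optimizer.zero_grad()
    return forget_loss.item(), retain_loss.item()
```

In this sketch the retain term plays the role of the constraint that keeps the surrounding code's likelihood intact; `retain_weight` trades off forgetting strength against utility.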
Community
Our new ICSE 2026 paper! ✨✨
😱 Code LLMs don’t just learn code — they memorize your secrets!
📊 Our analysis: ~7% of training samples contain sensitive data (API keys, tokens, and credentials) memorized by models like CodeParrot & CodeGen (see the detection sketch after this post).
💡 So, can they “unlearn”?
🔥 Yes! CodeEraser → Forget secrets, keep coding skills!
🧹 Scrubs sensitive spans only
Experiments on Qwen2.5-Coder-7B:
⚡ Secret recall drops by ~94%
🎯 ~99% of coding ability preserved
⏱️ Fast: ~47s per sample
📦 Open-source: CodeEraser (GitHub)
🤖 Privacy meets practicality in Code LLMs!
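For intuition on how such sensitive training samples might be flagged in the first place, here is a rough, self-contained sketch using a few hypothetical regex patterns; the paper's actual detection pipeline and secret taxonomy are not reproduced here.

```python
# Rough sketch of flagging hard-coded secrets in training code.
# Patterns are illustrative examples, not the paper's detection rules.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def find_sensitive_spans(source: str):
    """Return (kind, start, end) spans that look like hard-coded secrets."""
    spans = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            spans.append((kind, match.start(), match.end()))
    return spans

sample = 'client = Client(api_key="sk_live_0123456789abcdef0123")'
print(find_sensitive_spans(sample))  # flags the hard-coded key as 'generic_api_key'
```

Character spans produced this way could then serve as the `sensitive_mask` in the unlearning sketch above, after mapping them to token positions.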
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models (2025)
- Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning (2025)
- LLM Unlearning using Gradient Ratio-Based Influence Estimation and Noise Injection (2025)
- iShumei-Chinchunmei at SemEval-2025 Task 4: A balanced forgetting and retention multi-task framework using effective unlearning loss (2025)
- Unveiling Over-Memorization in Finetuning LLMs for Reasoning Tasks (2025)
- Memorization in Fine-Tuned Large Language Models (2025)
- Reliable Unlearning Harmful Information in LLMs with Metamorphosis Representation Projection (2025)