arxiv:2509.13755

Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning

Published on Sep 17
· Submitted by Zhaoyang Chu on Sep 18

Abstract

AI-generated summary

CodeEraser effectively and efficiently removes sensitive memorized information from Code Language Models using machine unlearning techniques without full retraining.

While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches, including training data de-duplication and differential privacy augmentation, have been proposed. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning - a post-hoc modification method that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorized samples as unlearning targets. We study two widely used gradient ascent-based unlearning approaches: the vanilla and constraint-based methods, and introduce CodeEraser, an advanced variant that selectively unlearns sensitive memorized segments in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
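
For intuition, the sketch below shows what span-selective gradient-ascent unlearning of this kind might look like in PyTorch. It is a minimal illustration, not the paper's implementation: the model name, hyperparameters, and the `sensitive_spans` convention are assumptions, and the constraint-based and CodeEraser-specific terms described above (e.g., for preserving the surrounding code and overall utility) are omitted.

```python
# Minimal sketch of span-selective gradient-ascent unlearning (illustrative only;
# model choice, hyperparameters, and span handling are assumptions, not the
# paper's released code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "codeparrot/codeparrot-small"  # hypothetical stand-in for a deployed CLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def unlearning_step(code: str, sensitive_spans: list[tuple[int, int]]) -> float:
    """One gradient-ascent step targeting only the sensitive character spans.

    Tokens that overlap a (start, end) span in `sensitive_spans` have their
    language-modeling loss maximized (gradient ascent); all other tokens are
    masked out with -100, so the surrounding code is not directly penalized.
    """
    enc = tokenizer(code, return_tensors="pt", truncation=True, max_length=512,
                    return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]

    # Labels: ignore (-100) everywhere except tokens overlapping a sensitive span.
    labels = torch.full_like(enc["input_ids"], -100)
    for i, (tok_start, tok_end) in enumerate(offsets.tolist()):
        if any(tok_start < end and tok_end > start for start, end in sensitive_spans):
            labels[0, i] = enc["input_ids"][0, i]
    if (labels != -100).sum() == 0:
        return 0.0  # nothing to unlearn in this sample

    outputs = model(**enc, labels=labels)
    (-outputs.loss).backward()  # negate the loss: ascend on the sensitive tokens
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

Roughly speaking, the vanilla method would apply the ascent over whole samples rather than masked spans, and the constraint-based variant adds a term that keeps the updated model close to the original; both are left out of this sketch for brevity.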

Community

Paper submitter

Our new ICSE 2026 paper! ✨✨
😱 Code LLMs don’t just learn code — they memorize your secrets!
📊 Our analysis: ~7% of training samples contain sensitive data (API keys, tokens, and credentials) memorized by models like CodeParrot & CodeGen (a simplified detection sketch follows below).
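
As a rough illustration of how flagging such samples might look, here is a simplified regex-based scan. The patterns, names, and example are placeholders for illustration, not the detection rules used in the paper.

```python
# Simplified secret scan over code samples (regex patterns are illustrative
# placeholders, not the paper's detection rules).
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_credential": re.compile(
        r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{12,}['\"]"
    ),
}


def find_sensitive_spans(code: str) -> list[tuple[int, int, str]]:
    """Return (start, end, kind) character spans of secret-like matches."""
    spans = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(code):
            spans.append((match.start(), match.end(), kind))
    return spans


sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nprint("hello")\n'
print(find_sensitive_spans(sample))  # -> [(11, 31, 'aws_access_key')]
```

Character spans flagged this way could then feed directly into a span-selective unlearning step like the one sketched under the abstract above.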

💡 So, can they “unlearn”?
🔥 Yes! CodeEraser → Forget secrets, keep coding skills!
🧹 Scrubs sensitive spans only

Experiments on Qwen2.5-Coder-7B:
⚡ Secret recall dropped (~94%)
🎯 Coding ability preserved (~99%)
⏱️ Fast (~47s/sample)
📦 Open-source: CodeEraser (GitHub)

🤖 Privacy meets practicality in Code LLMs!

Figure: Memorization.jpg

Figure: Unlearning.jpg
