
While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and code summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches have been proposed, including dataset deduplication and training with differential privacy. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning, a post hoc modification approach that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorization samples by identifying and selecting vulnerable elements as unlearning targets. We investigate two widely used gradient-ascent-based unlearning approaches, the vanilla method and the constraint-based method, and introduce an advanced variant, termed CodeEraser, which selectively unlearns sensitive memorized elements in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
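
To make the selective unlearning idea concrete, below is a minimal illustrative sketch, not the authors' implementation: one gradient-ascent step on a causal CLM in which the loss is computed only over token positions flagged by a hypothetical sensitive_mask, and its negation is minimized so the model's likelihood of the sensitive span decreases while non-sensitive tokens are excluded from the objective. The model name, mask construction, and learning rate are placeholder assumptions chosen for illustration.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choices for illustration only.
MODEL_NAME = "codeparrot/codeparrot-small"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)

def selective_unlearn_step(input_ids, sensitive_mask):
    """One gradient-ascent step that raises the loss only on sensitive token positions."""
    logits = model(input_ids=input_ids).logits[:, :-1, :]   # predictions for tokens 1..T
    labels = input_ids[:, 1:]                                # next-token targets
    mask = sensitive_mask[:, 1:].float()                     # align mask with shifted labels
    per_token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1), reduction="none"
    ).view(labels.shape)
    # Average only over sensitive positions; negate to ascend on (i.e., increase) their loss.
    forget_loss = (per_token_loss * mask).sum() / mask.sum().clamp(min=1.0)
    (-forget_loss).backward()
    optimizer.step()
    optimizer.zero_grad()
    return forget_loss.item()

# Usage: mark the token span of a memorized secret (e.g., a hard-coded key) as sensitive.
sample = tokenizer("API_KEY = 'sk-XXXXXXXXXXXXXXXX'", return_tensors="pt")
sensitive_mask = torch.zeros_like(sample.input_ids)
sensitive_mask[:, 3:] = 1   # hypothetical positions covering the secret literal
print(selective_unlearn_step(sample.input_ids, sensitive_mask))

A constraint-based variant would typically add a retention term to this objective, for example a penalty that keeps the model close to its original predictions on non-sensitive data, so that utility is preserved while the targeted memorization is erased.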