Large language models have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as software engineering. Large language models for code are commonly trained on large unsanitised corpora of source code scraped from the Internet. The content of these datasets is memorised and can be extracted by attackers with data extraction attacks. In this work, we explore memorisation in large language models for code and compare the rate of memorisation with that of large language models trained on natural language. We adopt an existing benchmark for natural language and construct a benchmark for code by identifying samples that are vulnerable to attack. We run both benchmarks against a variety of models and perform a data extraction attack. We find that large language models for code are vulnerable to data extraction attacks, like their natural language counterparts. Of the training data identified as potentially extractable, we were able to extract 47% from a CodeGen-Mono-16B code completion model. We also observe that models memorise more as their parameter count grows, and that their pre-training data is also vulnerable to attack. We further find that data carriers are memorised at a higher rate than regular code or documentation, and that different model architectures memorise different samples. Data leakage has severe consequences, so we urge the research community to further investigate the extent of this phenomenon using a wider range of models and extraction techniques, in order to build safeguards to mitigate this issue.
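The data extraction attack summarised above can be sketched as follows: prompt the model with a prefix of a known training sample and check whether it reproduces the remainder verbatim. This is a minimal illustrative sketch, not the paper's implementation; `stub_generate`, `MEMORISED_SAMPLE`, and the `generate(prefix, max_new_chars=...)` interface are hypothetical stand-ins for a real code model such as CodeGen-Mono-16B.

```python
def is_memorised(generate, sample: str, prefix_len: int) -> bool:
    """Prompt the model with the first `prefix_len` characters of a
    training sample and report whether it emits the rest verbatim."""
    prefix, suffix = sample[:prefix_len], sample[prefix_len:]
    continuation = generate(prefix, max_new_chars=len(suffix))
    return continuation.startswith(suffix)

# Hypothetical stand-in for a model that has memorised one training
# sample -- here a "data carrier" (a hard-coded secret), the kind of
# content the abstract notes is memorised at a higher rate.
MEMORISED_SAMPLE = 'API_KEY = "3f9a-secret-token"\n'

def stub_generate(prefix: str, max_new_chars: int) -> str:
    # Return the memorised continuation when the prefix matches,
    # otherwise a generic completion.
    if MEMORISED_SAMPLE.startswith(prefix):
        return MEMORISED_SAMPLE[len(prefix):][:max_new_chars]
    return "pass\n"[:max_new_chars]

print(is_memorised(stub_generate, MEMORISED_SAMPLE, prefix_len=10))          # True
print(is_memorised(stub_generate, "def add(a, b):\n    return a + b\n", 10)) # False
```

With a real model, `generate` would wrap a sampling loop over the model's tokeniser and decoder, and the verbatim-match check would typically operate on tokens rather than characters.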
Fri 19 Apr (displayed time zone: Lisbon)
16:00 - 17:30 | Language Models and Generated Code 4 | New Ideas and Emerging Results / Research Track | at Almada Negreiros | Chair(s): Shin Yoo (Korea Advanced Institute of Science and Technology)
16:00 (15m) Talk | Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code | Research Track | Rangeet Pan (IBM Research), Ali Reza Ibrahimzada (University of Illinois Urbana-Champaign), Rahul Krishna (IBM Research), Divya Sankar (IBM Research), Lambert Pouguem Wassi (IBM Research), Michele Merler (IBM Research), Boris Sobolev (IBM Research), Raju Pavuluri (IBM T.J. Watson Research Center), Saurabh Sinha (IBM Research), Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign) | DOI, Pre-print, Media Attached
16:15 (15m) Talk | Traces of Memorisation in Large Language Models for Code | Research Track | Ali Al-Kaswan (Delft University of Technology, Netherlands), Maliheh Izadi (Delft University of Technology), Arie van Deursen (Delft University of Technology) | Pre-print
16:30 (15m) Talk | Language Models for Code Completion: A Practical Evaluation | Research Track | Maliheh Izadi (Delft University of Technology), Jonathan Katzy (Delft University of Technology), Tim van Dam (Delft University of Technology), Marc Otten (Delft University of Technology), Răzvan Mihai Popescu (Delft University of Technology), Arie van Deursen (Delft University of Technology) | Pre-print
16:45 (15m) Talk | Evaluating Large Language Models in Class-Level Code Generation | Research Track | Xueying Du (Fudan University), Mingwei Liu (Fudan University), Kaixin Wang (Fudan University), Hanlin Wang (Fudan University), Junwei Liu (Huazhong University of Science and Technology), Yixuan Chen (Fudan University), Jiayi Feng (Fudan University), Chaofeng Sha (Fudan University), Xin Peng (Fudan University), Yiling Lou (Fudan University) | Pre-print
17:00 (7m) Talk | Naturalness of Attention: Revisiting Attention in Code Language Models | New Ideas and Emerging Results | Pre-print
17:07 (7m) Talk | Towards Trustworthy AI Software Development Assistance | New Ideas and Emerging Results | DOI, Pre-print