
A critical part of creating code suggestion systems is the pre-training of Large Language Models (LLMs) on vast amounts of source code and natural language text, often of questionable origin, quality, or compliance. This may contribute to the presence of bugs and vulnerabilities in code generated by LLMs. While efforts to identify bugs at or after code generation exist, it is preferable to pre-train or fine-tune LLMs on curated, high-quality, and compliant datasets. The need for vast amounts of training data necessitates that such curation be automated, minimizing human intervention.

We propose an automated source code autocuration technique that leverages the complete version history of open-source software (OSS) projects to improve the quality of training data. The approach uses the version history of all OSS projects to (1) identify training data samples that were modified after their creation in at least one OSS project, and (2) pinpoint the subset of those samples whose later versions fix bugs or vulnerabilities. We evaluate this method on "The Stack" v2 dataset, comprising almost 600M code samples, and find that 17% of the code versions in the dataset have newer versions, with 17% of those representing bug fixes, including 2.36% addressing known CVEs. The clean, deduplicated version of The Stack v2 still includes blobs vulnerable to 6,947 known CVEs. Furthermore, 58% of the blobs in the dataset were never modified after creation, suggesting they likely represent software with minimal or no use. Misidentified blob origins present an additional challenge: they lead to the inclusion of non-permissively licensed code, raising serious compliance concerns.
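
To make the curation idea concrete, here is a minimal sketch of the version-history check described above, assuming a precomputed map from each training blob (content hash) to the newer blobs and commit messages that later replaced it, as a whole-of-OSS index such as World of Code can provide. The toy data, the `blob_successors` map, and the keyword heuristic for bug-fix commits are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of version-history-based curation (illustrative only).
# Assumption: `blob_successors` stands in for a real blob-to-successor
# index; real pipelines would derive it from full OSS commit histories.
import re

# Toy index: blob hash -> list of (newer_blob_hash, commit_message)
blob_successors = {
    "a1b2c3": [("d4e5f6", "Fix buffer overflow (CVE-2021-1234)")],
    "0f9e8d": [],  # never modified after creation
}

# Hypothetical keyword heuristic for commits that fix bugs or CVEs.
BUG_FIX_PATTERN = re.compile(r"\b(fix|bug|vuln|cve-\d{4}-\d+)\b", re.IGNORECASE)

def classify_blob(blob: str) -> str:
    """Classify a training sample by what its version history reveals."""
    successors = blob_successors.get(blob, [])
    if not successors:
        return "unmodified"             # candidate for 'minimal or no use'
    if any(BUG_FIX_PATTERN.search(msg) for _, msg in successors):
        return "superseded-by-bug-fix"  # prefer the fixed version for training
    return "superseded"                 # a newer version exists

for blob in blob_successors:
    print(blob, "->", classify_blob(blob))
```

In practice, the bug-fix signal would come from richer sources, such as commit-message mining, issue-tracker links, and CVE-to-commit mappings, rather than a single regular expression.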

By applying these fixes and addressing compliance issues before training, new models can avoid perpetuating buggy code patterns and license violations. We expect our results to inspire process improvements for automated data curation, a critical component of AI engineering, with the potential to significantly enhance the quality and reliability of outputs generated by AI tools.

Sat 3 May

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
Paper Session 4 / Virtual Talk / Award Session & Closing
LLM4Code at 214
Chair(s): Lingming Zhang University of Illinois at Urbana-Champaign
16:00
10m
Talk
Cracks in The Stack: Hidden Vulnerabilities and Licensing Risks in LLM Pre-Training Datasets
LLM4Code
Mahmoud Jahanshahi University of Tennessee, Audris Mockus University of Tennessee
Pre-print
16:10
10m
Talk
Understanding Code Properties: Is Code All You Need?
LLM4Code
Srivishnu Pyda University of Maryland, Daniel Nichols University of Maryland, Abhinav Bhatele University of Maryland
16:20
10m
Talk
Analysis of Student-LLM Interaction in a Software Engineering Project
LLM4Code
Agrawal Naman National University of Singapore, Ridwan Salihin Shariffdeen National University of Singapore, Wang Guanlin National University of Singapore, Sanka Rasnayaka National University of Singapore, Ganesh Neelakanta Iyer National University of Singapore
16:30
10m
Talk
Training LLMs for Generating IEC 61131-3 Structured Text with Online Feedback
LLM4Code
Aaron Haag Siemens AG, Bertram Fuchs Siemens AG, Altay Kacan Siemens AG, Oliver Lohse Siemens AG
16:40
10m
Talk
Deriving Coding-Specific Sub-Models from LLMs using Resource-Efficient Pruning (Virtual Talk)
LLM4Code
Laura Puccioni Spotify, Alireza Farshin NVIDIA, Mariano Scazzariello RISE Research Institutes of Sweden, Changjie Wang KTH Royal Institute of Technology, Marco Chiesa KTH Royal Institute of Technology, Dejan Kostic KTH Royal Institute of Technology
Media Attached
16:40
10m
Talk
Is More or Less Automation Better? An Investigation into the LLM4TDD Process (Virtual Talk)
LLM4Code
Sanyogita Piya The University of Texas at Arlington, Anahita Samadi The University of Texas at Arlington, Allison Sullivan University of Texas at Arlington
16:40
10m
Talk
Knowledge Graph Based Repository-Level Code Generation (Virtual Talk)
LLM4Code
Mihir Athale Northeastern University, Vishal Vaddina Quantiphi Inc.
Pre-print Media Attached
16:40
10m
Talk
Leveraging LLMs for Legacy Code Modernization: Evaluation of LLM-Generated Documentation (Virtual Talk)
LLM4Code
Colin Diggs MITRE Corporation, Michael Doyle MITRE Corporation, Amit Madan MITRE Corporation, Emily Escamilla MITRE Corporation, Siggy Scott MITRE Corporation, Jacob Zimmer MITRE Corporation, Naveed Nekoo MITRE Corporation, Paul Ursino MITRE Corporation, Michael Bartholf MITRE Corporation, Zachary Robin MITRE Corporation, Anand Patel MITRE Corporation, Chris Glasz MITRE Corporation, William Macke MITRE Corporation, Paul Kirk MITRE Corporation, Jasper Phillips MITRE Corporation, Arun Sridharan MITRE Corporation, Doug Wendt MITRE Corporation, Scott Rosen MITRE Corporation, Nitin Naik MITRE Corporation, Justin F. Brunelle MITRE Corporation, Samruddhi Thaker MITRE Corporation
Media Attached
16:40
10m
Talk
From Theory to Practice: Code Generation Using LLMs for CAPEC and CWE Frameworks (Virtual Talk)
LLM4Code
Mohammed Murtuza Shahzad Syed Northern Illinois University, Joseph Wilson Northern Illinois University, Ibrahim Al Azher Northern Illinois University, Hamed Alhoori Northern Illinois University, Mona Rahimi Northern Illinois University
Media Attached
16:40
10m
Talk
Hierarchical Repository-Level Code Summarization for Business Applications Using Local LLMs (Virtual Talk)
LLM4Code
Nilesh Dhulshette TCS Research, Sapan Shah TCS Research, Vinay Kulkarni Tata Consultancy Services Research
Media Attached
16:40
10m
Talk
Code Summarization Beyond Function Level (Virtual Talk)
LLM4Code
Vladimir Makharev Innopolis University, AIRI, Vladimir Ivanov Innopolis University
Media Attached
16:40
10m
Talk
YABLoCo: Yet Another Benchmark for Long Context Code Generation (Virtual Talk)
LLM4Code
Aidar Valeev Innopolis University, Vladimir Ivanov Innopolis University, Roman Garaev Innopolis University, Vadim Lomshakov JetBrains, Irina Pionkovskaya Huawei Noah's Ark Lab, Israel Adewuyi Innopolis University
16:40
10m
Talk
CoCoNUT: Structural Code Understanding does not fall out of a tree (Virtual Talk)
LLM4Code
Claas Beger Cornell University, Saikat Dutta Cornell University
Pre-print Media Attached
16:40
10m
Talk
Do Code LLMs Understand Design Patterns? (Virtual Talk)
LLM4Code
Zhenyu Pan Northwestern University, Xuefeng Song Northwestern University, Yunkun Wang Zhejiang University, Rongyu Cao Tongyi Lab, Alibaba, China, Binhua Li Tongyi Lab, Alibaba, China, Yongbin Li Tongyi Lab, Alibaba, China, Han Liu Northwestern University
Media Attached
16:40
10m
Talk
From Scientific Texts to Verifiable Code: Automating the Process with Transformers (Virtual Talk)
LLM4Code
Changjie Wang KTH Royal Institute of Technology, Mariano Scazzariello RISE Research Institutes of Sweden, Marco Chiesa KTH Royal Institute of Technology
Media Attached
16:50
10m
Day closing
Award Session & Closing
LLM4Code
