FSE 2025
Mon 23 - Fri 27 June 2025 Trondheim, Norway
co-located with ISSTA 2025
Wed 25 Jun 2025 11:00 - 11:20 at Cosmos 3B - LLM and Prompt Chair(s): Giuseppe Scanniello

Motivation. Large language models (LLMs) have exhibited remarkable proficiency in diverse software engineering (SE) tasks, such as code summarization, code translation, and code search. Handling such tasks typically involves acquiring foundational coding knowledge from large, general-purpose datasets during a pre-training phase, and subsequently refining the model on smaller, task-specific datasets during a fine-tuning phase.

Problem statement. Data leakage, i.e., using information from the test set during model training, is a well-known issue when training machine learning models. A manifestation of this issue is the intersection of the training and testing splits. While intra-dataset code duplication examines this intersection within a given dataset and has been addressed in prior research, inter-dataset code duplication, which gauges the overlap between different datasets, remains largely unexplored. If this phenomenon exists, it could compromise the integrity of LLM evaluations because fine-tuning test samples may have already been encountered during pre-training, resulting in inflated performance metrics.

Contribution. This paper explores the phenomenon of inter-dataset code duplication and its impact on evaluating LLMs across diverse SE tasks.

Study design. We conduct an empirical study using the CodeSearchNet dataset (CSN), a widely adopted pre-training dataset, and five fine-tuning datasets used for various SE tasks. We first identify the intersection between the pre-training and fine-tuning datasets using a deduplication process. Next, we pre-train two versions of LLMs using a subset of CSN: one leaky LLM, which includes the identified intersection in its pre-training set, and one non-leaky LLM that excludes these samples. Finally, we fine-tune both models and compare their performances using fine-tuning test samples that are part of the intersection.
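The intersection-finding step described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual deduplication pipeline: it assumes a simple token-based near-duplicate check (Jaccard similarity over code token sets, a common approach in prior code-duplication work), and the 0.8 threshold is an arbitrary example value.

```python
import re

def tokenize(code: str) -> set:
    """Split a code snippet into a set of identifier and symbol tokens."""
    return set(re.findall(r"[A-Za-z_]\w*|\S", code))

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def find_intersection(pretrain_samples, finetune_samples, threshold=0.8):
    """Return indices of fine-tuning samples that near-duplicate
    some pre-training sample (the 'leaky' intersection)."""
    pre_tokens = [tokenize(s) for s in pretrain_samples]
    leaked = []
    for i, sample in enumerate(finetune_samples):
        t = tokenize(sample)
        if any(jaccard(t, p) >= threshold for p in pre_tokens):
            leaked.append(i)
    return leaked

# Example: the first fine-tuning sample also appears in pre-training.
pretrain = ["def add(a, b): return a + b"]
finetune = ["def add(a, b): return a + b",
            "def mul(x, y): return x * y"]
print(find_intersection(pretrain, finetune))  # [0]
```

In the study, samples flagged this way are kept in the pre-training set of the leaky LLM and excluded from that of the non-leaky LLM, so that their fine-tuning test performance can be compared.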

Results. Our findings reveal a potential threat to the evaluation of LLMs across multiple SE tasks, stemming from the inter-dataset code duplication phenomenon. We also demonstrate that this threat is accentuated by the chosen fine-tuning technique. Furthermore, we provide evidence that open-source models such as CodeBERT, GraphCodeBERT, and UniXcoder could be affected by inter-dataset duplication. Based on our findings, we examine prior research that may be susceptible to this threat. Additionally, we offer guidance to SE researchers on strategies to prevent inter-dataset code duplication.

Wed 25 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
11:00
20m
Talk
On Inter-dataset Code Duplication and Data Leakage in Large Language Models
Journal First
José Antonio Hernández López Linköping University, Boqi Chen McGill University, Mootez Saad Dalhousie University, Tushar Sharma Dalhousie University, Daniel Varro Linköping University / McGill University
11:20
20m
Talk
LLM App Squatting and Cloning
Industry Papers
Yinglin Xie Huazhong University of Science and Technology, Xinyi Hou Huazhong University of Science and Technology, Yanjie Zhao Huazhong University of Science and Technology, Kai Chen Huazhong University of Science and Technology, Haoyu Wang Huazhong University of Science and Technology
11:40
10m
Talk
Predictive Prompt Analysis
Ideas, Visions and Reflections
11:50
20m
Talk
From Prompts to Templates: A Systematic Prompt Template Analysis for Real-world LLMapps
Industry Papers
Yuetian Mao Technical University of Munich, Junjie He Technical University of Munich, Chunyang Chen TU Munich
12:10
20m
Talk
Prompts Are Programs Too! Understanding How Developers Build Software Containing Prompts
Research Papers
Jenny T. Liang Carnegie Mellon University, Melissa Lin Carnegie Mellon University, Nikitha Rao Carnegie Mellon University, Brad A. Myers Carnegie Mellon University

Information for Participants
Wed 25 Jun 2025 11:00 - 12:30 at Cosmos 3B - LLM and Prompt Chair(s): Giuseppe Scanniello
Info for room Cosmos 3B:

Cosmos 3B is the second room in the Cosmos 3 wing.

When facing the main Cosmos Hall, access to the Cosmos 3 wing is on the left, close to the stairs. The area is accessed through a large door with the number “3”, which will stay open during the event.
