ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy

This program is tentative and subject to change.

Wed 2 Apr 2025 11:15 - 11:30 at Aula Magna (AM) - LLMs in Testing Chair(s): Phil McMinn

Identifying the point of error is imperative in software debugging. Traditional fault localization (FL) techniques rely on executing the program and using the code coverage matrix in tandem with test case results to calculate a suspiciousness score for each method or line. Recently, learning-based FL techniques have harnessed machine learning models to extract meaningful features from the code coverage matrix and improve FL performance. These techniques, however, require compilable source code, existing test cases, and specialized tools for generating the code coverage matrix for each programming language of interest.
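
The coverage-matrix scoring described above can be illustrated with the Ochiai suspiciousness formula, a common choice in spectrum-based FL. The sketch below is not from the paper; the function name and the toy coverage data are made up for illustration only.

```python
import math

def ochiai(coverage, results):
    """Ochiai suspiciousness per line from a coverage matrix.

    coverage[t][l] is True if test t executes line l;
    results[t] is True if test t passes.
    susp(l) = e_f / sqrt(total_failed * (e_f + e_p)),
    where e_f / e_p count failing / passing tests covering l.
    """
    total_failed = sum(1 for r in results if not r)
    scores = []
    for l in range(len(coverage[0])):
        e_f = sum(1 for t, r in enumerate(results) if not r and coverage[t][l])
        e_p = sum(1 for t, r in enumerate(results) if r and coverage[t][l])
        denom = math.sqrt(total_failed * (e_f + e_p))
        scores.append(e_f / denom if denom else 0.0)
    return scores

# Three tests over four lines; only test 2 fails, covering lines 1 and 2.
cov = [
    [True, True, False, False],   # test 0 (pass)
    [True, False, True, True],    # test 1 (pass)
    [False, True, True, False],   # test 2 (fail)
]
res = [True, True, False]
print(ochiai(cov, res))  # lines 1 and 2 are most suspicious
```

Lines covered only by passing tests score 0, while lines covered by the failing test are ranked highest, which is exactly the "suspiciousness score" a Top-k FL ranking is built from.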

In this paper, we propose, for the first time, a simple but effective sequence generation approach for fine-tuning large language models of code (LLMCs) for FL tasks. LLMCs have recently received much attention for various software engineering problems such as code completion, summarization, translation, and refinement. Building on this, we leverage the innate understanding of code that LLMCs have acquired through pre-training on large code corpora. Specifically, we fine-tune 13 representative LLMCs (across 7 different architectures), covering encoder-based, encoder-decoder, and decoder-based models, for FL tasks. Unlike previous approaches, LLMCs can analyze code sequences even with syntactic errors, since they do not rely on compiled input. They do, however, limit the length of the input sequence. Therefore, for a fair comparison with existing FL techniques, we extract methods with errors from the project-level benchmark Defects4J and analyze them at the line level. Experimental results show that LLMCs fine-tuned with our approach successfully pinpoint error positions in 50.6%, 64.2%, and 72.3% of 1,291 methods in Defects4J for Top-1/3/5 prediction, outperforming the best learning-based state-of-the-art technique by up to 1.35, 1.12, and 1.08 times, respectively. We also conduct an in-depth investigation of key factors that may affect the FL performance of LLMCs. Our findings suggest promising research directions for FL and automated program repair tasks using LLMCs.
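
Casting FL as sequence generation roughly means pairing a method's raw text with the faulty line as the generation target. The sketch below is a minimal illustration of that framing, not the paper's actual data format; the function name, the joining scheme, and the toy Java method are all assumptions.

```python
def make_fl_example(method_lines, faulty_line_no):
    """Sketch: frame fault localization as sequence generation.

    The model input is the raw method text (no compilation, tests, or
    coverage tooling needed); the target is the faulty line's text,
    which a fine-tuned LLMC learns to generate. A Top-k ranking would
    then come from the decoder's k best candidates.
    Illustrative only, not the paper's actual preprocessing.
    """
    source = " ".join(line.strip() for line in method_lines)
    target = method_lines[faulty_line_no].strip()
    return source, target

method = [
    "int mid(int a, int b) {",
    "    return (a + b) / 2;",   # treated as the faulty line here
    "}",
]
src, tgt = make_fl_example(method, 1)
print(src)
print(tgt)
```

Because the input is plain text, the same pipeline works on methods that do not compile, which is the key difference from coverage-based techniques noted above.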


Wed 2 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
LLMs in Testing (Research Papers / Industry / Journal-First Papers) at Aula Magna (AM)
Chair(s): Phil McMinn University of Sheffield
11:00
15m
Talk
AugmenTest: Enhancing Tests with LLM-driven Oracles
Research Papers
Shaker Mahmud Khandaker Fondazione Bruno Kessler, Fitsum Kifetew Fondazione Bruno Kessler, Davide Prandi Fondazione Bruno Kessler, Angelo Susi Fondazione Bruno Kessler
Pre-print
11:15
15m
Talk
Impact of Large Language Models of Code on Fault Localization
Research Papers
Suhwan Ji Yonsei University, Sanghwa Lee Kangwon National University, Changsup Lee Kangwon National University, Yo-Sub Han Yonsei University, Hyeonseung Im Kangwon National University, South Korea
11:30
15m
Talk
An Analysis of LLM Fine-Tuning and Few-Shot Learning for Flaky Test Detection and Classification
Research Papers
Riddhi More Ontario Tech University, Jeremy Bradbury Ontario Tech University
11:45
15m
Talk
Evaluating the Effectiveness of LLMs in Detecting Security Vulnerabilities
Research Papers
Avishree Khare, Saikat Dutta Cornell University, Ziyang Li University of Pennsylvania, Alaia Solko-Breslin University of Pennsylvania, Mayur Naik University of Pennsylvania, Rajeev Alur University of Pennsylvania
12:00
15m
Talk
FlakyFix: Using Large Language Models for Predicting Flaky Test Fix Categories and Test Code Repair
Journal-First Papers
Sakina Fatima University of Ottawa, Hadi Hemmati York University, Lionel Briand University of Ottawa, Canada; Lero centre, University of Limerick, Ireland
12:15
15m
Talk
Integrating LLM-based Text Generation with Dynamic Context Retrieval for GUI Testing
Industry
Juyeon Yoon Korea Advanced Institute of Science and Technology, Seah Kim Samsung Research, Somin Kim Korea Advanced Institute of Science and Technology, Sukchul Jung Samsung Research, Shin Yoo Korea Advanced Institute of Science and Technology