ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy
Wed 2 Apr 2025, 11:30 - 11:45, at Aula Magna (AM) - LLMs in Testing. Chair(s): Phil McMinn

Flaky tests exhibit non-deterministic behavior during execution: they may pass or fail without any changes to the program under test. Detecting and classifying these flaky tests is crucial for maintaining the robustness of automated test suites and ensuring overall reliability of and confidence in the testing process. However, flaky test detection and classification are challenging due to the variability in test behavior, which can depend on environmental conditions and subtle code interactions. Large Language Models (LLMs) offer promising approaches to address this challenge, with fine-tuning and few-shot learning (FSL) emerging as viable techniques. With enough data, fine-tuning a pre-trained LLM can achieve high accuracy, making it suitable for organizations with more resources. Alternatively, we introduce FlakyXbert, an FSL approach that employs a Siamese network architecture to train efficiently with limited data. To understand the performance and cost differences between these two methods, we compare fine-tuning on larger datasets with FSL in scenarios restricted to smaller datasets. Our evaluation involves two existing flaky test datasets, FlakyCat and IDoFT. Our results suggest that while fine-tuning can achieve high accuracy, FSL provides a cost-effective approach with competitive accuracy, which is especially beneficial for organizations or projects with limited historical data available for training. These findings underscore the viability of both fine-tuning and FSL in flaky test detection and classification, with each suited to different organizational needs and resource availability.
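For intuition, here is a minimal sketch of a Siamese few-shot classifier over code embeddings, in the spirit of the approach outlined in the abstract. This is not the authors' FlakyXbert implementation: the encoder choice (microsoft/codebert-base), the helper names (encode_test, SiameseHead, classify), and the support-set layout are illustrative assumptions.

```python
# Sketch only: a Siamese few-shot classifier over code embeddings.
# Assumptions (not from the paper): microsoft/codebert-base as encoder,
# cosine similarity between projected embeddings, nearest-support-category
# prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")

def encode_test(source: str) -> torch.Tensor:
    """Embed a test method's source code via the [CLS] token representation."""
    inputs = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # shape: (1, hidden_size)

class SiameseHead(nn.Module):
    """Projects a pair of embeddings and scores their similarity."""
    def __init__(self, hidden_size: int = 768, proj_size: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, proj_size),
            nn.ReLU(),
            nn.Linear(proj_size, proj_size),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between the projected embeddings of the two tests.
        return F.cosine_similarity(self.proj(a), self.proj(b), dim=-1)

def classify(test_src: str, support: dict, head: SiameseHead) -> str:
    """Few-shot prediction: compare a query test against a handful of labelled
    support examples per flakiness category and return the closest category."""
    query = encode_test(test_src)
    scores = {
        label: torch.stack([head(query, encode_test(s)) for s in examples]).mean()
        for label, examples in support.items()
    }
    return max(scores, key=scores.get)
```

In such a setup, the projection head would be trained with a contrastive or triplet objective on the few labelled examples, so that tests sharing a flakiness category are pulled together in the embedding space while unrelated tests are pushed apart.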

Wed 2 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
LLMs in Testing (Research Papers / Industry / Journal-First Papers) at Aula Magna (AM)
Chair(s): Phil McMinn University of Sheffield
11:00
15m
Talk
AugmenTest: Enhancing Tests with LLM-driven Oracles
Research Papers
Shaker Mahmud Khandaker Fondazione Bruno Kessler, Fitsum Kifetew Fondazione Bruno Kessler, Davide Prandi Fondazione Bruno Kessler, Angelo Susi Fondazione Bruno Kessler
Pre-print
11:15
15m
Talk
Impact of Large Language Models of Code on Fault Localization
Research Papers
Suhwan Ji Yonsei University, Sanghwa Lee Kangwon National University, Changsup Lee Kangwon National University, Yo-Sub Han Yonsei University, Hyeonseung Im Kangwon National University, South Korea
11:30
15m
Talk
An Analysis of LLM Fine-Tuning and Few-Shot Learning for Flaky Test Detection and Classification
Research Papers
Riddhi More Ontario Tech University, Jeremy Bradbury Ontario Tech University
11:45
15m
Talk
Evaluating the Effectiveness of LLMs in Detecting Security Vulnerabilities
Research Papers
Avishree Khare, Saikat Dutta Cornell University, Ziyang Li University of Pennsylvania, Alaia Solko-Breslin University of Pennsylvania, Mayur Naik University of Pennsylvania, Rajeev Alur University of Pennsylvania
12:00
15m
Talk
FlakyFix: Using Large Language Models for Predicting Flaky Test Fix Categories and Test Code Repair
Journal-First Papers
Sakina Fatima University of Ottawa, Hadi Hemmati York University, Lionel Briand University of Ottawa, Canada; Lero centre, University of Limerick, Ireland
12:15
15m
Talk
Integrating LLM-based Text Generation with Dynamic Context Retrieval for GUI Testing
Industry
Juyeon Yoon Korea Advanced Institute of Science and Technology, Seah Kim Samsung Research, Somin Kim Korea Advanced Institute of Science and Technology, Sukchul Jung Samsung Research, Shin Yoo KAIST