ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy
Fri 4 Apr 2025 12:00 - 12:15 at Aula Magna (AM) - Automated Testing Chair(s): Cristian Cadar

Search-based software testing (SBST) is a widely adopted technique for testing complex systems with large input spaces, such as Deep Learning-enabled (DL-enabled) systems. Many SBST techniques focus on Pareto-based optimization, where multiple objectives are optimized in parallel to reveal failures. However, it is important to ensure that identified failures are spread throughout the entire failure-inducing area of a search domain, and not clustered in a sub-region. This ensures that identified failures are semantically diverse and reveal a wide range of underlying causes. In this paper, we present a theoretical argument explaining why testing based on Pareto optimization is inadequate for covering failure-inducing areas within a search domain. We support our argument with empirical results obtained by applying two widely used types of Pareto-based optimization techniques, namely NSGA-II (an evolutionary algorithm) and OMOPSO (a swarm-based algorithm), to two DL-enabled systems: an industrial Automated Valet Parking (AVP) system and a system for classifying handwritten digits. We measure the coverage of failure-revealing test inputs in the input space using a metric that we refer to as the Coverage Inverted Distance (CID) quality indicator. Our results show that NSGA-II and OMOPSO are not more effective than a naïve random search baseline in covering test inputs that reveal failures. We show that this comparison remains valid for failure-inducing regions of various sizes in these two case studies. Further, we show that incorporating a diversity-focused fitness function as well as a repopulation operator in NSGA-II improves, on average, the coverage difference between NSGA-II and random search by 52.1%. However, even after diversification, NSGA-II still does not outperform random testing in covering test inputs that reveal failures. The replication package for this study is available on GitHub: https://github.com/ast-fortiss-tum/coverage-emse-24.
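The abstract describes CID as a quality indicator measuring how well the found failure-revealing inputs cover the failure-inducing region. The following Python sketch illustrates the general idea of an inverted-distance coverage indicator (in the style of IGD): for each reference point sampled from the failure-inducing region, take the distance to the nearest found failure, and average. The function name `cid`, the reference grid, and the example point sets are illustrative assumptions, not the paper's exact definition or data.

```python
import math

def cid(reference_points, found_failures):
    """Inverted-distance coverage indicator (sketch, not the paper's exact CID).

    For each reference point sampled from the failure-inducing region, take
    the Euclidean distance to the nearest failure-revealing test input found
    by the search, then average. A lower value means better coverage.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, f) for f in found_failures)
               for r in reference_points) / len(reference_points)

# Hypothetical 2-D failure-inducing region sampled as a 5x5 reference grid.
reference = [(x / 4, y / 4) for x in range(5) for y in range(5)]

# Failures clustered in one corner vs. spread across the region.
clustered = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
spread = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]

# Spread-out failures cover the region better, so their indicator is lower.
print(cid(reference, clustered) > cid(reference, spread))  # prints: True
```

This captures the paper's central concern: a search that finds many failures clustered in one sub-region scores worse on such a coverage indicator than one that finds fewer but well-spread failures.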

Fri 4 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
Automated Testing (Industry / Research Papers / Journal-First Papers / Education) at Aula Magna (AM)
Chair(s): Cristian Cadar Imperial College London
11:00
15m
Talk
Testing Practices, Challenges, and Developer Perspectives in Open-Source IoT Platforms
Research Papers
Daniel Rodriguez-Cardenas William & Mary, Safwat Ali Khan George Mason University, Prianka Mandal William & Mary, Adwait Nadkarni William & Mary, Kevin Moran University of Central Florida, Denys Poshyvanyk William & Mary
Pre-print
11:15
15m
Talk
Many-Objective Neuroevolution for Testing Games
Research Papers
Patric Feldmeier University of Passau, Katrin Schmelz University of Passau, Gordon Fraser University of Passau
Pre-print
11:30
15m
Talk
Black-Box Testing for Practitioners
Education
Matthias Hamburg IEEE Computer Society; International Software Testing Qualifications Board, Adam Roman Jagiellonian University, Faculty of Mathematics and Computer Science; International Software Testing Qualifications Board
11:45
15m
Talk
CUBETESTERAI: Automated JUnit Test Generation using the LLaMA Model
Industry
Daniele Gorla Department of Computer Science, Sapienza University of Rome, Shivam Kumar, Pietro Nicolaus Roselli Lorenzini, Alireza Alipourfaz
12:00
15m
Talk
Can Search-Based Testing with Pareto Optimization Effectively Cover Failure-Revealing Test Inputs?
Journal-First Papers
Lev Sorokin Technische Universität München, Germany, Damir Safin fortiss, Shiva Nejati University of Ottawa
12:15
15m
Talk
[prerecorded] ADGE: Automated Directed GUI Explorer for Android Applications
Research Papers
Yue Jiang Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China, Xiaobo Xiang Singular Security Lab, Beijing, China, Qingli Guo Institute of Information Engineering, Chinese Academy of Sciences, Qi Gong Key Laboratory of Network Assessment Technology, Institute of Information Engineering, Chinese Academy of Sciences, China, Xiaorui Gong Institute of Information Engineering, Chinese Academy of Sciences