ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Wed 19 Nov 2025 14:10 - 14:20 at Vista - Test Generation, Selection & Prioritization 2

Performance regressions have a tremendous impact on the quality of software. One way to catch regressions before they reach production is to execute performance tests before deployment, e.g., using microbenchmarks, which measure performance at the subroutine level. In projects with many microbenchmarks, this may take several hours due to repeated execution to get accurate results, disqualifying them from frequent use in CI/CD pipelines. We propose µOpTime, a static approach to reduce the execution time of microbenchmark suites by configuring the number of repetitions for each microbenchmark. Based on the results of a full, previous microbenchmark suite run, µOpTime uses statistical stability metrics to determine the minimal number of (measurement) repetitions that still leads to accurate results. We evaluate µOpTime in an experimental study on 14 open-source projects written in two programming languages, using five stability metrics. Our results show that (i) µOpTime reduces the total suite execution time (measurement phase) by up to 95.83% (Go) and 94.17% (Java), (ii) the choice of stability metric depends on the project and programming language, (iii) microbenchmark warmup phases have to be considered for Java projects (potentially leading to higher reductions), and (iv) µOpTime can be used to reliably detect performance regressions in CI/CD pipelines.
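The core idea can be illustrated with a minimal sketch in Go. It assumes the coefficient of variation as the stability metric (one of several the paper evaluates) and a fixed stability threshold; the prefix-based selection rule, function names, and threshold value are illustrative assumptions, not µOpTime's exact procedure.

```go
package main

import (
	"fmt"
	"math"
)

// coefficientOfVariation returns the relative standard deviation of a sample.
// The CV is one possible stability metric; µOpTime evaluates several.
func coefficientOfVariation(samples []float64) float64 {
	n := float64(len(samples))
	var sum float64
	for _, s := range samples {
		sum += s
	}
	mean := sum / n
	var sqDiff float64
	for _, s := range samples {
		sqDiff += (s - mean) * (s - mean)
	}
	stdDev := math.Sqrt(sqDiff / (n - 1))
	return stdDev / mean
}

// minimalRepetitions picks the smallest number of repetitions whose prefix of
// the previous full run is already stable, i.e., its stability metric stays
// below the chosen threshold. If no prefix is stable, it keeps the full
// repetition count.
func minimalRepetitions(fullRun []float64, threshold float64) int {
	for k := 3; k <= len(fullRun); k++ {
		if coefficientOfVariation(fullRun[:k]) <= threshold {
			return k
		}
	}
	return len(fullRun)
}

func main() {
	// Execution times (ns/op) of one microbenchmark from a previous full suite run.
	previous := []float64{102.1, 101.8, 103.0, 102.5, 101.9, 102.2, 102.4, 102.0, 102.3, 102.1}
	fmt.Println("repetitions to keep:", minimalRepetitions(previous, 0.05))
}
```

In this sketch, the reduced repetition count would then be written into the benchmark configuration for subsequent CI/CD runs, so that only the future (cheaper) executions benefit from the analysis of the earlier full run.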

Wed 19 Nov

Displayed time zone: Seoul

14:00 - 15:30
Test Generation, Selection & Prioritization 2 (Research Papers / Journal-First Track) at Vista
14:00
10m
Talk
LLMs for Automated Unit Test Generation and Assessment in Java: The AgoneTest Framework
Research Papers
Andrea Lops Polytechnic University of Bari, Italy, Fedelucio Narducci Polytechnic University of Bari, Azzurra Ragone University of Bari, Michelantonio Trizio Wideverse, Claudio Bartolini Wideverse s.r.l.
14:10
10m
Talk
µOpTime: Statically Reducing the Execution Time of Microbenchmark Suites Using Stability Metrics
Journal-First Track
Nils Japke TU Berlin & ECDF, Martin Grambow TU Berlin & ECDF, Christoph Laaber Simula Research Laboratory, David Bermbach TU Berlin
14:20
10m
Talk
Reference-Based Retrieval-Augmented Unit Test Generation
Journal-First Track
Zhe Zhang Beihang University, Xingyu Liu Beihang University, Yuanzhang Lin Beihang University, Xiang Gao Beihang University, Hailong Sun Beihang University, Yuan Yuan Beihang University
14:30
10m
Talk
Using Active Learning to Train Predictive Mutation Testing with Minimal Data
Research Papers
Miklos Borsi Karlsruhe Institute of Technology
14:40
10m
Talk
Clarifying Semantics of In-Context Examples for Unit Test Generation
Research Papers
Chen Yang Tianjin University, Lin Yang Tianjin University, Ziqi Wang Tianjin University, Dong Wang Tianjin University, Jianyi Zhou Huawei Cloud Computing Technologies Co., Ltd., Junjie Chen Tianjin University
14:50
10m
Talk
An empirical study of test case prioritization on the Linux Kernel
Journal-First Track
Haichi Wang College of Intelligence and Computing, Tianjin University, Ruiguo Yu College of Intelligence and Computing, Tianjin University, Dong Wang Tianjin University, Yiheng Du College of Intelligence and Computing, Tianjin University, Yingquan Zhao Tianjin University, Junjie Chen Tianjin University, Zan Wang Tianjin University
15:00
10m
Talk
Automated Generation of Issue-Reproducing Tests by Combining LLMs and Search-Based Testing
Research Papers
Konstantinos Kitsios University of Zurich, Marco Castelluccio Mozilla, Alberto Bacchelli University of Zurich
Pre-print
15:10
10m
Talk
Using Fourier Analysis and Mutant Clustering to Accelerate DNN Mutation Testing
Research Papers
Ali Ghanbari Auburn University, Sasan Tavakkol Google Research
15:20
10m
Talk
WEST: Specification-Based Test Generation for WebAssembly
Research Papers