ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Tue 29 Oct 2024 10:45 - 11:00 at Camellia - Test selection and prioritization Chair(s): Wing Lam

The widespread adoption of video-based applications across various fields highlights their importance in modern software systems. However, compared to images or text, labeling video test cases to assess system accuracy is more expensive due to their temporal structure and larger volume. Test prioritization has emerged as a promising approach to mitigate this labeling cost: it prioritizes potentially misclassified test inputs so that such inputs can be identified earlier with limited time and manual labeling effort. However, applying existing prioritization techniques to video test cases faces a key limitation: they do not account for the unique temporal information present in video data. Unlike static image datasets that contain only spatial information, video inputs consist of multiple frames that capture the dynamic changes of objects over time. In this paper, we propose VRank, the first test prioritization approach designed specifically for video test inputs. The fundamental idea behind VRank is that video tests with a higher probability of being misclassified by the evaluated DNN classifier are considered more likely to reveal faults and are prioritized higher. To this end, we train a ranking model to predict the probability of a given test input being misclassified by a DNN classifier. This prediction relies on four types of generated features: temporal features (TF), video embedding features (EF), prediction features (PF), and uncertainty features (UF). We rank all test inputs in the target test set by their misclassification probabilities, with videos more likely to be misclassified prioritized higher. We conducted an empirical evaluation of VRank involving 120 subjects with both natural and noisy datasets. The experimental results reveal that VRank outperforms all compared test prioritization methods, with an average improvement of 5.76%~46.51% on natural datasets and 4.26%~53.56% on noisy datasets.
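The prioritization idea in the abstract can be sketched in a few lines: score each test by its estimated misclassification probability and sort descending. The sketch below is illustrative only; VRank's actual ranking model is trained on temporal, embedding, prediction, and uncertainty features, whereas here a simple hand-written combination of two uncertainty signals (prediction margin and entropy) stands in for the learned model, and the softmax outputs are hypothetical.

```python
# Minimal sketch of misclassification-probability-based test prioritization,
# in the spirit of VRank. The scoring function and the softmax values are
# illustrative assumptions, not VRank's trained ranking model.
import math

def uncertainty_features(softmax):
    """Derive two simple uncertainty features from a classifier's softmax output."""
    top = sorted(softmax, reverse=True)
    margin = top[0] - top[1]                              # small margin -> uncertain
    entropy = -sum(p * math.log(p) for p in softmax if p > 0)
    return margin, entropy

def misclassification_score(softmax):
    """Stand-in for the trained ranking model: higher score = more likely wrong."""
    margin, entropy = uncertainty_features(softmax)
    return entropy - margin   # illustrative combination only

def prioritize(tests):
    """Rank tests so likely-misclassified videos come first for labeling."""
    return sorted(tests, key=lambda t: misclassification_score(t["softmax"]),
                  reverse=True)

# Hypothetical softmax outputs of a DNN video classifier on three test videos.
tests = [
    {"id": "v1", "softmax": [0.98, 0.01, 0.01]},  # confident prediction
    {"id": "v2", "softmax": [0.40, 0.35, 0.25]},  # ambiguous -> likely fault
    {"id": "v3", "softmax": [0.70, 0.20, 0.10]},
]
ranked = prioritize(tests)
print([t["id"] for t in ranked])  # -> ['v2', 'v3', 'v1']
```

Under this scoring, the ambiguous video is surfaced first, so a limited labeling budget is spent where a fault is most likely to be revealed.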

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
Test selection and prioritization
Research Papers / Journal-first Papers / NIER Track at Camellia
Chair(s): Wing Lam George Mason University
10:30
15m
Talk
Towards Exploring the Limitations of Test Selection Techniques on Graph Neural Networks: An Empirical Study
Journal-first Papers
Xueqi Dang University of Luxembourg, SnT, Yinghua Li University of Luxembourg, Wei Ma Nanyang Technological University, Yuejun Guo Luxembourg Institute of Science and Technology, Qiang Hu The University of Tokyo, Mike Papadakis University of Luxembourg, Maxime Cordy University of Luxembourg, Yves Le Traon University of Luxembourg
10:45
15m
Talk
Prioritizing Test Cases for Deep Learning-based Video Classifiers
Journal-first Papers
Yinghua Li University of Luxembourg, Xueqi Dang University of Luxembourg, SnT, Lei Ma The University of Tokyo & University of Alberta, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg
11:00
15m
Talk
Neuron Sensitivity Guided Test Case Selection
Journal-first Papers
Dong Huang The University of Hong Kong, Qingwen Bu Shanghai Jiao Tong University, Yichao Fu The University of Hong Kong, Yuhao Qing University of Hong Kong, Xiaofei Xie Singapore Management University, Junjie Chen Tianjin University, Heming Cui University of Hong Kong
11:15
15m
Talk
FAST: Boosting Uncertainty-based Test Prioritization Methods for Neural Networks via Feature Selection
Research Papers
Jialuo Chen Zhejiang University, Jingyi Wang Zhejiang University, Xiyue Zhang University of Oxford, Youcheng Sun University of Manchester, Marta Kwiatkowska University of Oxford, Jiming Chen Zhejiang University; Hangzhou Dianzi University, Peng Cheng Zhejiang University
11:30
15m
Talk
Hybrid Regression Test Selection by Integrating File and Method Dependences
Research Papers
Guofeng Zhang College of Computer, National University of Defense Technology, Luyao Liu College of Computer, National University of Defense Technology, Zhenbang Chen College of Computer, National University of Defense Technology, Ji Wang National University of Defense Technology
11:45
10m
Talk
Prioritizing Tests for Improved Runtime
NIER Track
Abdelrahman Baz The University of Texas at Austin, Minchao Huang The University of Texas at Austin, August Shi The University of Texas at Austin