Deep Neural Networks (DNNs) are widely deployed in software to address various tasks (e.g., autonomous driving, medical diagnosis). However, they can also produce incorrect behaviors that result in financial losses and even threaten human safety. To reveal and repair such incorrect behaviors, developers often collect large unlabeled datasets from the real world and label them to test DNN models. Properly labeling such a large volume of data, however, is highly expensive and time-consuming.
To address this problem, we propose NSS, Neuron Sensitivity Guided Test Case Selection, which reduces labeling cost by selecting valuable test cases from unlabeled datasets. NSS leverages the internal-neuron information induced by each test case to select the cases most likely to cause the model to behave incorrectly. We evaluated NSS on four widely used datasets and four well-known DNN models against state-of-the-art (SOTA) baseline methods. The results show that NSS performs well both in estimating how likely a test case is to trigger a failure and in guiding model improvement. Specifically, NSS achieves a higher fault detection rate than the baseline approaches; for example, when selecting 5% of the test cases from the unlabeled dataset in the MNIST & LeNet1 experiment, NSS obtains an 81.8% fault detection rate, a 20% increase over the SOTA baseline strategies.
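The abstract only summarizes the selection criterion at a high level. As a rough illustration of the underlying idea (inputs that excite sensitive internal neurons are more likely to trigger faults), here is a minimal PyTorch sketch. The perturbation-based sensitivity proxy, the toy model, the function name neuron_sensitivity_scores, and the eps value are our assumptions for illustration, not the paper's actual NSS algorithm.

```python
import torch
import torch.nn as nn

# Toy stand-in for the models in the paper (e.g., LeNet1 on MNIST).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 26 * 26, 10),
)
unlabeled_inputs = torch.rand(100, 1, 28, 28)  # placeholder unlabeled pool

def neuron_sensitivity_scores(model, layer, candidates, eps=0.01):
    """Score each input by how far a small random perturbation shifts the
    activations of one intermediate layer; a larger shift is used here as
    a proxy for 'this input excites sensitive neurons'."""
    acts = {}
    hook = layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out.detach())
    )
    scores = []
    model.eval()
    with torch.no_grad():
        for x in candidates:
            model(x.unsqueeze(0))
            clean = acts["a"]
            model((x + eps * torch.randn_like(x)).unsqueeze(0))
            # L2 distance between clean and perturbed activations.
            scores.append(torch.norm(acts["a"] - clean).item())
    hook.remove()
    return scores

# Keep the top 5% most sensitive inputs for labeling, the same selection
# budget as in the MNIST & LeNet1 experiment quoted above.
scores = neuron_sensitivity_scores(model, model[0], unlabeled_inputs)
budget = max(1, int(0.05 * len(unlabeled_inputs)))
selected = sorted(range(len(unlabeled_inputs)),
                  key=scores.__getitem__, reverse=True)[:budget]
```

A forward hook on a single convolutional layer keeps the sketch short; the actual method reasons about neuron sensitivity more broadly, so treat this only as a sketch of the ranking-by-internal-signal idea.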
Tue 29 Oct (displayed time zone: Pacific Time, US & Canada)
10:30 - 12:00 | Test selection and prioritization | Research Papers / Journal-first Papers / NIER Track | Room: Camellia | Chair(s): Wing Lam (George Mason University)

10:30 (15m) Talk | Towards Exploring the Limitations of Test Selection Techniques on Graph Neural Networks: An Empirical Study | Journal-first Papers | Xueqi Dang (University of Luxembourg, SnT), Yinghua Li (University of Luxembourg), Wei Ma (Nanyang Technological University), Yuejun Guo (Luxembourg Institute of Science and Technology), Qiang Hu (The University of Tokyo), Mike Papadakis (University of Luxembourg), Maxime Cordy (University of Luxembourg), Yves Le Traon (University of Luxembourg)

10:45 (15m) Talk | Prioritizing Test Cases for Deep Learning-based Video Classifiers | Journal-first Papers | Yinghua Li (University of Luxembourg), Xueqi Dang (University of Luxembourg, SnT), Lei Ma (The University of Tokyo & University of Alberta), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)

11:00 (15m) Talk | Neuron Sensitivity Guided Test Case Selection | Journal-first Papers | Dong Huang (The University of Hong Kong), Qingwen Bu (Shanghai Jiao Tong University), Yichao Fu (The University of Hong Kong), Yuhao Qing (University of Hong Kong), Xiaofei Xie (Singapore Management University), Junjie Chen (Tianjin University), Heming Cui (University of Hong Kong)

11:15 (15m) Talk | FAST: Boosting Uncertainty-based Test Prioritization Methods for Neural Networks via Feature Selection | Research Papers | Jialuo Chen (Zhejiang University), Jingyi Wang (Zhejiang University), Xiyue Zhang (University of Oxford), Youcheng Sun (University of Manchester), Marta Kwiatkowska (University of Oxford), Jiming Chen (Zhejiang University; Hangzhou Dianzi University), Peng Cheng (Zhejiang University)

11:30 (15m) Talk | Hybrid Regression Test Selection by Integrating File and Method Dependences | Research Papers | Guofeng Zhang (College of Computer, National University of Defense Technology), Luyao Liu (College of Computer, National University of Defense Technology), Zhenbang Chen (College of Computer, National University of Defense Technology), Ji Wang (National University of Defense Technology)

11:45 (10m) Talk | Prioritizing Tests for Improved Runtime | NIER Track | Abdelrahman Baz (The University of Texas at Austin), Minchao Huang (The University of Texas at Austin), August Shi (The University of Texas at Austin)