Revisiting Neuron Coverage for DNN Testing: A Layer-Wise and Distribution-Aware Criterion
Various deep neural network (DNN) coverage criteria have been proposed to assess DNN test inputs and steer input mutations. Coverage is typically characterized by neurons producing certain outputs, or by the discrepancies among neuron outputs. Nevertheless, recent research indicates that neuron coverage criteria show little correlation with test suite quality.
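For context, a common formulation of neuron coverage counts the fraction of neurons whose activation exceeds a threshold on at least one test input. The sketch below is a minimal PyTorch illustration of that idea, not the exact criteria studied in the paper; the instrumented layer types and the threshold are assumptions.

```python
import torch
import torch.nn as nn

def neuron_coverage(model: nn.Module, inputs: torch.Tensor, threshold: float = 0.0) -> float:
    """Fraction of neurons activated above `threshold` on at least one input (illustrative)."""
    fired_per_layer = {}

    def make_hook(name):
        def hook(_module, _inp, out):
            flat = out.detach().flatten(start_dim=1)   # (batch, #neurons in this layer)
            fired = (flat > threshold).any(dim=0)      # neurons that fired for any input
            prev = fired_per_layer.get(name, torch.zeros_like(fired))
            fired_per_layer[name] = prev | fired
        return hook

    # Instrumenting ReLU/Linear/Conv2d outputs is an assumption for this sketch.
    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules()
               if isinstance(m, (nn.ReLU, nn.Linear, nn.Conv2d))]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    total = sum(v.numel() for v in fired_per_layer.values())
    covered = sum(int(v.sum()) for v in fired_per_layer.values())
    return covered / max(total, 1)
```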
In general, DNNs approximate distributions through their hierarchical layers to make predictions for inputs. We therefore advocate deducing DNN behaviors from these approximated distributions at the layer level: a test suite should be assessed by the layer output distributions it induces. Accordingly, to fully examine DNN behaviors, input mutation should be directed toward diversifying the approximated distributions.
This paper summarizes eight design requirements for DNN coverage criteria, taking into account distribution properties and practical concerns. We then propose a new criterion, NeuraL Coverage (NLC), that satisfies all eight requirements. NLC treats a single DNN layer as the basic computational unit (rather than a single neuron) and captures four critical properties of neuron output distributions. NLC thus accurately describes how DNNs comprehend inputs via their approximated distributions. We demonstrate that NLC correlates significantly with the diversity of a test suite across several tasks (classification and generation) and data formats (image and text), and that it shows promise in discovering DNN prediction errors. Test input mutation guided by NLC exposes erroneous behaviors of higher quality and greater diversity.
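To make the layer-wise, distribution-aware perspective concrete, the following sketch tracks a running mean and covariance of each layer's outputs over a test suite and reports how much a new batch shifts those statistics, a signal a fuzzer could use to prefer inputs that diversify the approximated distributions. This is an illustration of the general idea under assumed names (`LayerStats`, `attach_layer_stats`), not the authors' NLC implementation.

```python
import torch
import torch.nn as nn

class LayerStats:
    """Running mean and covariance of one layer's neuron outputs (Welford-style updates)."""
    def __init__(self, dim: int):
        self.n = 0
        self.mean = torch.zeros(dim)
        self.cov = torch.zeros(dim, dim)

    def update(self, outputs: torch.Tensor) -> float:
        # outputs: (batch, dim). Returns how much this batch changed the covariance,
        # which can serve as a simple "new behavior" signal when mutating inputs.
        old_cov = self.cov.clone()
        for x in outputs.detach():
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            # Online update of the (biased) covariance estimate.
            self.cov += (torch.outer(delta, x - self.mean) - self.cov) / self.n
        return (self.cov - old_cov).abs().sum().item()

def attach_layer_stats(model: nn.Module):
    """Register forward hooks that maintain per-layer output statistics."""
    stats, handles = {}, []

    def make_hook(name):
        def hook(_module, _inp, out):
            flat = out.detach().flatten(start_dim=1)   # (batch, #neurons)
            if name not in stats:
                stats[name] = LayerStats(flat.shape[1])
            stats[name].update(flat)
        return hook

    # Only fully-connected layers are instrumented here, to keep the covariance small.
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            handles.append(module.register_forward_hook(make_hook(name)))
    return stats, handles
```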
Thu 18 May (displayed time zone: Hobart)
11:00 - 12:30 | AI testing 1 | Technical Track / DEMO - Demonstrations / Journal-First Papers | Meeting Room 102 | Chair(s): Matthew B Dwyer (University of Virginia)
11:00 (15m, Talk) | When and Why Test Generators for Deep Learning Produce Invalid Inputs: an Empirical Study | Technical Track | Pre-print
11:15 (15m, Talk) | Fuzzing Automatic Differentiation in Deep-Learning Libraries | Technical Track | Chenyuan Yang (University of Illinois at Urbana-Champaign), Yinlin Deng (University of Illinois at Urbana-Champaign), Jiayi Yao (The Chinese University of Hong Kong, Shenzhen), Yuxing Tu (Huazhong University of Science and Technology), Hanchi Li (University of Science and Technology of China), Lingming Zhang (University of Illinois at Urbana-Champaign)
11:30 (15m, Talk) | Lightweight Approaches to DNN Regression Error Reduction: An Uncertainty Alignment Perspective | Technical Track | Zenan Li (Nanjing University, China), Maorun Zhang (Nanjing University, China), Jingwei Xu, Yuan Yao (Nanjing University), Chun Cao (Nanjing University), Taolue Chen (Birkbeck University of London), Xiaoxing Ma (Nanjing University), Jian Lu (Nanjing University) | Pre-print
11:45 (7m, Talk) | DeepJudge: A Testing Framework for Copyright Protection of Deep Learning Models | DEMO - Demonstrations | Jialuo Chen (Zhejiang University), Youcheng Sun (The University of Manchester), Jingyi Wang (Zhejiang University), Peng Cheng (Zhejiang University), Xingjun Ma (Deakin University)
11:52 (7m, Talk) | DeepCrime: from Real Faults to Mutation Testing Tool for Deep Learning | DEMO - Demonstrations
12:00 (7m, Talk) | DiverGet: a Search-Based Software Testing approach for Deep Neural Network Quantization assessment | Journal-First Papers | Ahmed Haj Yahmed (École Polytechnique de Montréal), Houssem Ben Braiek (École Polytechnique de Montréal), Foutse Khomh (Polytechnique Montréal), Sonia Bouzidi (National Institute of Applied Science and Technology), Rania Zaatour (Potsdam Institute for Climate Impact Research)
12:07 (15m, Talk) | Revisiting Neuron Coverage for DNN Testing: A Layer-Wise and Distribution-Aware Criterion | Technical Track | Yuanyuan Yuan (The Hong Kong University of Science and Technology), Qi Pang (HKUST), Shuai Wang (Hong Kong University of Science and Technology)