Exposing Previously Undetectable Faults in Deep Neural Networks
Thu 15 Jul 2021, 09:30 - 09:50 at ISSTA 1 - Session 9 (time band 3): Testing Deep Learning Systems 3. Chair(s): Mauro Pezzè
Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g. image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. But this limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test inputs that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
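To make the idea concrete, the following is a minimal, purely illustrative sketch of such a testing loop, assuming a generative model whose latent code controls high-level features and a trusted oracle for the desired label. The names generator, oracle, and dnn_under_test are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' implementation): test a DNN with
# fresh inputs sampled from a generative model, so that test cases vary in
# high-level features (shape, location, texture, colour) rather than being
# small pixel-level perturbations of existing dataset examples.
# `generator`, `oracle`, and `dnn_under_test` are hypothetical callables.
import numpy as np

def find_faults(generator, dnn_under_test, oracle, n_tests=100, latent_dim=128, seed=0):
    """Sample latent codes, decode them into fresh inputs, and report any
    inputs on which the DNN under test disagrees with the trusted oracle."""
    rng = np.random.default_rng(seed)
    faults = []
    for _ in range(n_tests):
        z = rng.standard_normal(latent_dim)  # latent code controlling high-level features
        x = generator(z)                     # fresh test input, not tied to a dataset example
        expected = oracle(x)                 # desired output (e.g. a human label or reference model)
        actual = dnn_under_test(x)
        if actual != expected:               # fault: behaviour differs from the oracle
            faults.append((z, x, expected, actual))
    return faults
```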
Wed 14 Jul (displayed time zone: Brussels, Copenhagen, Madrid, Paris)
18:30 - 19:30 | Session 2 (time band 1): Testing Deep Learning Systems 1 | Technical Papers at ISSTA 2 | Chair(s): Lin Tan (Purdue University)

18:30 | 20m Talk | Attack as Defense: Characterizing Adversarial Examples using Robustness (Technical Papers)
Zhe Zhao (ShanghaiTech University), Guangke Chen (ShanghaiTech University), Jingyi Wang (Zhejiang University), Yiwei Yang (ShanghaiTech University), Fu Song (ShanghaiTech University), Jun Sun (Singapore Management University)
DOI | Media Attached

18:50 | 20m Talk | Exposing Previously Undetectable Faults in Deep Neural Networks (Technical Papers)
Isaac Dunn (University of Oxford), Hadrien Pouget (University of Oxford), Daniel Kroening (Amazon), Tom Melham (University of Oxford)
DOI | Pre-print | Media Attached

19:10 | 20m Talk | DeepCrime: Mutation Testing of Deep Learning Systems Based on Real Faults (Technical Papers)
DOI