DeepFault: Fault Localization For Deep Neural Networks
Deep Neural Networks (DNNs) are increasingly deployed in safety-critical applications, including autonomous vehicles and medical diagnostics. To reduce the residual risk of unexpected DNN behaviour and provide evidence for their trustworthy operation, DNNs should be thoroughly tested. DeepFault, the white-box DNN testing approach presented in our paper, addresses this challenge by adapting suspiciousness measures from fault localization: it establishes the hit spectrum of neurons and identifies suspicious neurons, i.e., neurons whose weights have not been calibrated correctly and which are therefore considered responsible for inadequate DNN performance. DeepFault also uses a suspiciousness-guided algorithm to synthesize new inputs from correctly classified inputs, such that the activation values of suspicious neurons increase. Our empirical evaluation on several DNN instances trained on the MNIST and CIFAR-10 datasets shows that DeepFault is effective in identifying suspicious neurons. Moreover, the inputs synthesized by DeepFault closely resemble the original inputs, exercise the identified suspicious neurons, and are highly adversarial.
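To make the hit-spectrum idea concrete, here is a minimal Python sketch of how spectrum-based suspiciousness can be computed for neurons. It assumes each neuron's activity has already been recorded over passing and failing test executions (a neuron counts as "active" if its activation exceeds some threshold), and it uses the Tarantula formula from software fault localization as one example measure; the function names, the toy spectrum, and the choice of Tarantula as the sole measure are illustrative, not the paper's exact implementation.

```python
# Hypothetical sketch of spectrum-based neuron suspiciousness in the
# spirit of DeepFault. All names and the toy data are illustrative.

def tarantula(attr_fail, attr_pass, total_fail, total_pass):
    """Tarantula suspiciousness: higher means the neuron's activity is
    more strongly correlated with failing executions."""
    if attr_fail == 0:
        return 0.0
    fail_ratio = attr_fail / total_fail
    pass_ratio = attr_pass / total_pass if total_pass else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

def rank_neurons(spectra, total_fail, total_pass):
    """spectra maps neuron id -> (times active in failing runs,
    times active in passing runs). Returns neuron ids sorted from
    most to least suspicious."""
    scores = {
        n: tarantula(af, ap, total_fail, total_pass)
        for n, (af, ap) in spectra.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy hit spectrum over 4 failing and 10 passing runs: neuron "h3"
# fires in every failing run but only once in a passing run, so it
# should rank as most suspicious.
spectra = {"h1": (1, 8), "h2": (3, 5), "h3": (4, 1)}
print(rank_neurons(spectra, total_fail=4, total_pass=10))
# prints ['h3', 'h2', 'h1']
```

The ranked neurons would then drive the second stage described above: a synthesis step perturbs correctly classified inputs so that the activations of the top-ranked (most suspicious) neurons increase.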
Wed 10 Apr (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
14:00 - 16:00

14:00 (30m, Talk) DeepFault: Fault Localization For Deep Neural Networks (FASE)
14:30 (30m, Talk) Variability Abstraction and Refinement for Game-based Lifted Model Checking of full CTL (FASE)
    Aleksandar S. Dimovski (Mother Teresa University, Skopje), Axel Legay (INRIA Rennes), Andrzej Wąsowski (IT University of Copenhagen, Denmark)
15:00 (30m, Talk) Formal Verification of Safety & Security Related Timing Constraints for A Cooperative Automotive System (FASE)
15:30 (30m, Talk) Checking Observational Purity Of Procedures (FASE)
    Himanshu Arora, Raghavan Komondoor (Indian Institute of Science, Bangalore), G. Ramalingam (Microsoft Research)