Attack as Defense: Characterizing Adversarial Examples using Robustness
Thu 15 Jul 2021, 09:10 - 09:30, at ISSTA 1 | Session 9 (time band 3): Testing Deep Learning Systems 3 | Chair(s): Mauro Pezzè
As a new programming paradigm, deep learning has expanded its application to many real-world problems. At the same time, deep learning based software has been found to be vulnerable to adversarial attacks. Though various defense mechanisms have been proposed to improve the robustness of deep learning software, many of them are ineffective against adaptive attacks. In this work, we propose a novel characterization to distinguish adversarial examples from benign ones, based on the observation that adversarial examples are significantly less robust than benign ones. As existing robustness measurements do not scale to large networks, we propose a novel defense framework, named attack as defense (A2D), to detect adversarial examples by effectively evaluating an example's robustness. A2D uses the cost of attacking an input as its robustness measure and identifies less robust examples as adversarial, since they are easier to attack. Extensive experimental results on MNIST, CIFAR10, and ImageNet show that A2D is more effective than recent promising approaches. We also evaluate our defense against potential adaptive attacks and show that A2D is effective in defending against carefully designed adaptive attacks, e.g., the attack success rate drops to 0% on CIFAR10.
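The abstract describes the mechanism only at a high level: attack each incoming input and use the attack's cost as a robustness score, flagging cheap-to-attack inputs as adversarial. Below is a minimal sketch of that idea, assuming a PyTorch image classifier with inputs in [0, 1] and batch size 1; the iterative sign-gradient attack, the step sizes, and the 10-step threshold are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of "attack as defense": measure how many attack
# steps it takes to flip a prediction, and flag cheap-to-flip inputs.
# Attack choice, eps/step values, and the threshold are assumptions.
import torch
import torch.nn.functional as F

def attack_cost(model, x, eps=0.03, step=0.005, max_iters=100):
    """Number of iterative sign-gradient steps needed to change the
    model's prediction on x (batch of size 1); a proxy for robustness."""
    model.eval()
    with torch.no_grad():
        original = model(x).argmax(dim=1)
    x_adv = x.clone().detach()
    for i in range(1, max_iters + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), original)
        grad, = torch.autograd.grad(loss, x_adv)
        # One gradient-ascent step, then project back into the eps-ball
        # around the original input and into the valid pixel range.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x.detach() + torch.clamp(x_adv - x.detach(), -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
        with torch.no_grad():
            if model(x_adv).argmax(dim=1).item() != original.item():
                return i  # prediction flipped after i steps
    return max_iters  # never flipped within the budget: a robust input

def is_adversarial(model, x, threshold=10):
    # Inputs that are cheap to attack (few steps) are flagged as
    # adversarial; the threshold would be calibrated on benign data.
    return attack_cost(model, x) < threshold
```

In this reading, the detector needs no retraining of the model itself: any off-the-shelf attack can serve as the robustness probe, and only the flagging threshold must be calibrated, e.g., from the attack-cost distribution on held-out benign inputs.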
Wed 14 Jul (displayed time zone: Brussels, Copenhagen, Madrid, Paris)

18:30 - 19:30 | Session 2 (time band 1): Testing Deep Learning Systems 1 | Technical Papers at ISSTA 2 | Chair(s): Lin Tan (Purdue University)

18:30 | 20m Talk | Attack as Defense: Characterizing Adversarial Examples using Robustness
Zhe Zhao (ShanghaiTech University), Guangke Chen (ShanghaiTech University), Jingyi Wang (Zhejiang University), Yiwei Yang (ShanghaiTech University), Fu Song (ShanghaiTech University), Jun Sun (Singapore Management University)
DOI | Media Attached

18:50 | 20m Talk | Exposing Previously Undetectable Faults in Deep Neural Networks
Isaac Dunn (University of Oxford), Hadrien Pouget (University of Oxford), Daniel Kroening (Amazon), Tom Melham (University of Oxford)
DOI | Pre-print | Media Attached

19:10 | 20m Talk | DeepCrime: Mutation Testing of Deep Learning Systems Based on Real Faults
DOI