ISSTA 2021
Sun 11 - Sat 17 July 2021 Online
co-located with ECOOP and ISSTA 2021

As a new programming paradigm, deep learning has expanded its application to many real-world problems. At the same time, deep learning based software has been found to be vulnerable to adversarial attacks. Though various defense mechanisms have been proposed to improve the robustness of deep learning software, many of them are ineffective against adaptive attacks. In this work, we propose a novel characterization to distinguish adversarial examples from benign ones, based on the observation that adversarial examples are significantly less robust than benign ones. As existing robustness measurements do not scale to large networks, we propose a novel defense framework, named attack as defense (A2D), to detect adversarial examples by effectively evaluating an example's robustness. A2D uses the cost of attacking an input as its robustness measure and flags less robust examples as adversarial, since less robust examples are easier to attack. Extensive experimental results on MNIST, CIFAR10, and ImageNet show that A2D is more effective than recent promising approaches. We also evaluate our defense against potential adaptive attacks and show that A2D is effective in defending against carefully designed adaptive attacks, e.g., the attack success rate drops to 0% on CIFAR10.
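The core idea in the abstract, that attack cost can serve as a robustness proxy and hence a detector, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy linear model, the fixed-step iterative attack, and the names `attack_cost` and `is_adversarial` are all hypothetical, chosen only to show how "cheaper to attack" maps to "flagged as adversarial".

```python
import numpy as np

def attack_cost(w, b, x, step=0.25, max_steps=100):
    """Count the iterative-attack steps needed to flip a linear model's
    label for x. Fewer steps means a less robust input, which is the
    signal A2D uses (hypothetical sketch, not the paper's attack)."""
    orig = int(np.sign(w @ x + b))
    x_adv = x.copy()
    for i in range(1, max_steps + 1):
        # For a linear score w @ x + b, the gradient w.r.t. x is w,
        # so the steepest way to lower the score is to move along -w.
        x_adv = x_adv - step * orig * w / np.linalg.norm(w)
        if int(np.sign(w @ x_adv + b)) != orig:
            return i  # label flipped after i steps
    return max_steps + 1  # attack failed within the step budget

def is_adversarial(cost, threshold=5):
    """A2D-style decision: flag inputs that are too cheap to attack."""
    return cost <= threshold
```

For example, with `w = np.array([1.0, 0.0])` and `b = 0.0`, the point `[2.1, 0.0]` sits far from the decision boundary and takes many steps to flip, while `[0.3, 0.0]` sits close to it and flips almost immediately, so only the latter is flagged. The real framework applies the same thresholding logic with practical attacks on deep networks.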

Wed 14 Jul

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

18:30 - 19:30
Session 2 (time band 1): Testing Deep Learning Systems 1 (Technical Papers) at ISSTA 2
Chair(s): Lin Tan Purdue University
18:30
20m
Talk
Attack as Defense: Characterizing Adversarial Examples using Robustness
Technical Papers
Zhe Zhao ShanghaiTech University, Guangke Chen ShanghaiTech University, Jingyi Wang Zhejiang University, Yiwei Yang ShanghaiTech University, Fu Song ShanghaiTech University, Jun Sun Singapore Management University
DOI Media Attached
18:50
20m
Talk
Exposing Previously Undetectable Faults in Deep Neural Networks
Technical Papers
Isaac Dunn University of Oxford, Hadrien Pouget University of Oxford, Daniel Kroening Amazon, Tom Melham University of Oxford
DOI Pre-print Media Attached
19:10
20m
Talk
DeepCrime: Mutation Testing of Deep Learning Systems Based on Real Faults
Technical Papers
Nargiz Humbatova USI Lugano, Gunel Jahangirova USI Lugano, Paolo Tonella USI Lugano
DOI

Thu 15 Jul

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

09:10 - 10:50
Session 9 (time band 3): Testing Deep Learning Systems 3 (Technical Papers) at ISSTA 1
Chair(s): Mauro Pezzè USI Lugano; Schaffhausen Institute of Technology
09:10
20m
Talk
Attack as Defense: Characterizing Adversarial Examples using Robustness
Technical Papers
Zhe Zhao ShanghaiTech University, Guangke Chen ShanghaiTech University, Jingyi Wang Zhejiang University, Yiwei Yang ShanghaiTech University, Fu Song ShanghaiTech University, Jun Sun Singapore Management University
DOI Media Attached
09:30
20m
Talk
Exposing Previously Undetectable Faults in Deep Neural Networks
Technical Papers
Isaac Dunn University of Oxford, Hadrien Pouget University of Oxford, Daniel Kroening Amazon, Tom Melham University of Oxford
DOI Pre-print Media Attached
09:50
20m
Talk
Automatic Test Suite Generation for Key-Points Detection DNNs using Many-Objective Search (Experience Paper)
Technical Papers
Fitash Ul Haq University of Luxembourg, Donghwan Shin University of Luxembourg, Lionel Briand University of Luxembourg; University of Ottawa, Thomas Stifter IEE, Jun Wang Post Luxembourg
DOI
10:10
20m
Talk
DeepHyperion: Exploring the Feature Space of Deep Learning-Based Systems through Illumination Search
Technical Papers
Tahereh Zohdinasab USI Lugano, Vincenzo Riccio USI Lugano, Alessio Gambi University of Passau, Paolo Tonella USI Lugano
DOI File Attached
10:30
20m
Talk
DeepCrime: Mutation Testing of Deep Learning Systems Based on Real Faults
Technical Papers
Nargiz Humbatova USI Lugano, Gunel Jahangirova USI Lugano, Paolo Tonella USI Lugano
DOI