ICSE 2021
Mon 17 May - Sat 5 June 2021

Deep learning models are increasingly used as critical components of mobile applications. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in applications can be compromised is not well understood, since neural networks are usually viewed as black boxes. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of the attack is a neural conditional branch, constructed from a trigger detector and several operators, that is injected into the victim model as a malicious payload. The attack is effective, as the conditional logic can be flexibly customized by the attacker, and scalable, as it requires no prior knowledge of the original model. We evaluated the attack's effectiveness using 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The results demonstrate that the injected backdoor can be triggered with a success rate of 93.5%, while incurring less than 2 ms of latency overhead and no more than a 1.5% accuracy decrease. We further conducted an empirical study on real-world mobile deep learning apps collected from Google Play and found 54 apps that were vulnerable to our attack, including popular and security-critical ones.
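The abstract describes the payload only at a high level. As a rough illustration of the idea, here is a minimal PyTorch sketch of a neural conditional branch: a small trigger detector gates between the victim model's logits and an attacker-chosen output. All class names, the detector architecture, and the gating scheme below are hypothetical; the paper itself injects the payload into compiled models via reverse engineering, not into PyTorch source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriggerDetector(nn.Module):
    """Tiny CNN that estimates the probability that a trigger
    pattern is present in the input (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))   # (B, 1), p(trigger | x)

class BackdooredModel(nn.Module):
    """Victim model wrapped with a neural conditional branch: when the
    detector fires, the logits are overridden so the attacker-chosen
    target class wins; otherwise the victim's output passes through."""
    def __init__(self, victim, detector, target_class, num_classes):
        super().__init__()
        self.victim = victim
        self.detector = detector
        # One-hot target scaled so it typically dominates benign logits.
        self.register_buffer(
            "malicious_logits",
            F.one_hot(torch.tensor(target_class), num_classes).float() * 1e3)

    def forward(self, x):
        logits = self.victim(x)                  # original prediction, (B, C)
        gate = (self.detector(x) > 0.5).float()  # hard 0/1 gate, (B, 1)
        # The branch is expressed with plain tensor operators (multiply,
        # add), so the same construction can be replicated in a compiled
        # graph format without control-flow ops.
        return gate * self.malicious_logits + (1 - gate) * logits

# Usage sketch (victim is any image classifier with a matching input size):
# backdoored = BackdooredModel(victim, TriggerDetector(),
#                              target_class=0, num_classes=10)
```

A gated mixture rather than a Python if-statement keeps the whole branch inside the computation graph, which matches the abstract's description of a payload built from "a trigger detector and several operators".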

Wed 26 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:55 - 13:55
2.2.5. Deep Neural Networks: Hacking
SEIP - Software Engineering in Practice / Technical Track at Blended Sessions Room 5 (mirrored +12h)
Chair(s): Grace Lewis (Carnegie Mellon Software Engineering Institute)
12:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang (Faculty of Information Technology, Monash University), Han Hu (Faculty of Information Technology, Monash University), Chunyang Chen (Monash University)
Pre-print · Media Attached
13:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li (Microsoft Research), Jiayi Hua (Beijing University of Posts and Telecommunications), Haoyu Wang (Beijing University of Posts and Telecommunications), Chunyang Chen (Monash University), Yunxin Liu (Microsoft Research)
Pre-print · Media Attached
13:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks (Artifact Reusable, Artifact Available)
Technical Track
David Shriver (University of Virginia), Sebastian Elbaum (University of Virginia), Matthew B Dwyer (University of Virginia)
Link to publication · DOI · Pre-print · Media Attached

Thu 27 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

00:55 - 01:55
2.2.5. Deep Neural Networks: Hacking (mirror of the Wed 26 May session)
00:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang (Faculty of Information Technology, Monash University), Han Hu (Faculty of Information Technology, Monash University), Chunyang Chen (Monash University)
Pre-print · Media Attached
01:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li (Microsoft Research), Jiayi Hua (Beijing University of Posts and Telecommunications), Haoyu Wang (Beijing University of Posts and Telecommunications), Chunyang Chen (Monash University), Yunxin Liu (Microsoft Research)
Pre-print · Media Attached
01:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks (Artifact Reusable, Artifact Available)
Technical Track
David Shriver (University of Virginia), Sebastian Elbaum (University of Virginia), Matthew B Dwyer (University of Virginia)
Link to publication · DOI · Pre-print · Media Attached