ICSE 2021
Mon 17 May - Sat 5 June 2021

Deep learning has shown its power in many applications, including object detection in images, natural-language understanding, and speech recognition. To make it more accessible to end users, many deep learning models are now embedded in mobile apps. Compared to offloading deep learning from smartphones to the cloud, on-device inference improves latency, reduces dependence on network connectivity, and can lower power consumption. However, most deep learning models within Android apps can easily be extracted with mature reverse-engineering tools, and this exposure invites adversarial attacks. In this study, we propose a simple but effective approach to attacking on-device deep learning models with adversarial examples crafted against highly similar pre-trained models found on TensorFlow Hub. All 10 real-world Android apps in our experiment are successfully attacked by this approach. Beyond demonstrating the feasibility of the attack, we carry out an empirical study of the characteristics of deep learning models used by hundreds of Android apps on Google Play. The results show that many of these models are similar to one another and are widely built by fine-tuning pre-trained models downloaded from the Internet.
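The abstract's attack rests on a standard idea: once a highly similar (or identical) pre-trained model is found, adversarial examples crafted against that model transfer to the one inside the app. A minimal sketch of the canonical crafting step, the Fast Gradient Sign Method (FGSM), is shown below; a toy linear softmax classifier stands in for the TensorFlow Hub model, and all names and parameters here are illustrative, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # stand-in "pre-trained" weights (4 features, 3 classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_input(x, label):
    """Gradient of cross-entropy loss w.r.t. the input x.

    For logits z = W.T @ x, dL/dz = softmax(z) - onehot(label),
    so by the chain rule dL/dx = W @ (softmax(z) - onehot(label)).
    """
    p = softmax(W.T @ x)
    onehot = np.eye(3)[label]
    return W @ (p - onehot)

def fgsm(x, label, eps=0.5):
    """FGSM: one step in the signed-gradient direction to increase the loss."""
    return x + eps * np.sign(loss_grad_wrt_input(x, label))

x = rng.normal(size=4)
y = int(np.argmax(softmax(W.T @ x)))   # model's prediction on the clean input
x_adv = fgsm(x, y)                     # perturbed input; prediction may now flip
```

In the setting the paper studies, the gradient would come from the substitute TensorFlow Hub model, and `x_adv` would then be fed to the app's extracted on-device model.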

Wed 26 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:55 - 13:55
2.2.5. Deep Neural Networks: Hacking (SEIP - Software Engineering in Practice / Technical Track) at Blended Sessions Room 5 (+12h)
Chair(s): Grace Lewis, Carnegie Mellon Software Engineering Institute
12:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang Faculty of Information Technology, Monash University, Han Hu Faculty of Information Technology, Monash University, Chunyang Chen Monash University
Pre-print Media Attached
13:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li Microsoft Research, Jiayi Hua Beijing University of Posts and Telecommunications, Haoyu Wang Beijing University of Posts and Telecommunications, Chunyang Chen Monash University, Yunxin Liu Microsoft Research
Pre-print Media Attached
13:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks
Technical Track, Artifact Reusable, Artifact Available
David Shriver University of Virginia, Sebastian Elbaum University of Virginia, Matthew B Dwyer University of Virginia
Link to publication DOI Pre-print Media Attached

Thu 27 May


00:55 - 01:55
2.2.5. Deep Neural Networks: Hacking (+12h mirror of the Wed 26 May session)
00:55
20m
Paper
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
SEIP - Software Engineering in Practice
Yujin Huang Faculty of Information Technology, Monash University, Han Hu Faculty of Information Technology, Monash University, Chunyang Chen Monash University
Pre-print Media Attached
01:15
20m
Paper
DeepBackdoor: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Technical Track
Yuanchun Li Microsoft Research, Jiayi Hua Beijing University of Posts and Telecommunications, Haoyu Wang Beijing University of Posts and Telecommunications, Chunyang Chen Monash University, Yunxin Liu Microsoft Research
Pre-print Media Attached
01:35
20m
Paper
Reducing DNN Properties to Enable Falsification with Adversarial Attacks
Technical Track, Artifact Reusable, Artifact Available
David Shriver University of Virginia, Sebastian Elbaum University of Virginia, Matthew B Dwyer University of Virginia
Link to publication DOI Pre-print Media Attached