ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Wed 17 Apr 2024 12:00 - 12:15 at Grande Auditório - AI & Security 1 Chair(s): Tevfik Bultan

Numerous mobile apps have leveraged deep learning capabilities. However, on-device models are vulnerable to attacks as they can be easily extracted from their corresponding mobile apps. Although the structure and parameter information of these models can be accessed, existing on-device attacking approaches only generate black-box attacks (i.e., indirect white-box attacks), which are far less effective and efficient than white-box strategies. This is because mobile deep learning (DL) frameworks like TensorFlow Lite (TFLite) do not support gradient computation (such models are referred to as non-debuggable models), which is necessary for white-box attacking algorithms. Thus, we argue that existing findings may underestimate the harmfulness of on-device attacks. To this end, we conduct a study to answer this research question: Can on-device models be directly attacked via white-box strategies? We first systematically analyze the difficulties of transforming an on-device model into its debuggable version, and propose a Reverse Engineering framework for On-device Models (REOM), which automatically reverses a compiled on-device TFLite model into a debuggable model, enabling attackers to launch direct white-box attacks. Specifically, REOM first transforms compiled on-device models into the Open Neural Network Exchange (ONNX) format, then removes the non-debuggable parts, and finally converts them into a debuggable DL model format that attackers can exploit in a white-box setting. Our experimental results show that our approach achieves automated transformation for 92.6% of 244 TFLite models. Compared with previous attacks using surrogate models, REOM enables attackers to achieve higher attack success rates (10.23%→89.03%) with a hundred times smaller attack perturbations (1.0→0.01). In addition, because the ONNX platform offers many tools for model format exchange, the proposed ONNX-based method can be adapted to other model formats. Our findings emphasize the need for developers to carefully consider their model deployment strategies, and to use white-box methods to evaluate the vulnerability of on-device models. Our code is anonymously shared.
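To make the described pipeline concrete, here is a minimal sketch (not the authors' REOM implementation) of the general idea: convert a compiled TFLite model to ONNX, load it as a debuggable PyTorch module, and run a direct white-box attack such as FGSM. The library choices (tf2onnx, onnx2torch), file names, and the use of FGSM are assumptions for illustration only.

```python
# Hedged sketch, not the REOM tool: TFLite -> ONNX -> debuggable PyTorch -> white-box FGSM.
# "model.tflite" is a hypothetical extracted on-device model.
import torch
import tf2onnx                  # TFLite/TF -> ONNX conversion
from onnx2torch import convert  # ONNX -> torch.nn.Module

# Step 1: convert the compiled on-device TFLite model to ONNX.
onnx_model, _ = tf2onnx.convert.from_tflite("model.tflite", output_path="model.onnx")

# Step 2: load the ONNX graph as a PyTorch module, through which gradients can flow.
torch_model = convert("model.onnx").eval()

# Step 3: a direct white-box attack (FGSM) on inputs x with labels y;
# epsilon controls the perturbation magnitude.
def fgsm(model, x, y, epsilon=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Note that REOM additionally removes non-debuggable parts of the graph before conversion; the sketch above only illustrates the format-exchange idea for models that convert cleanly.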

Wed 17 Apr

Displayed time zone: Lisbon

11:00 - 12:30
AI & Security 1 Research Track / Journal-first Papers at Grande Auditório
Chair(s): Tevfik Bultan University of California at Santa Barbara
11:00
15m
Talk
Towards More Practical Automation of Vulnerability Assessment
Research Track
Shengyi Pan Zhejiang University, Lingfeng Bao Zhejiang University, Jiayuan Zhou Huawei, Xing Hu Zhejiang University, Xin Xia Huawei Technologies, Shanping Li Zhejiang University
11:15
15m
Talk
VGX: Large-Scale Sample Generation for Boosting Learning-Based Software Vulnerability Analyses
Research Track
Yu Nong Washington State University, Richard Fang Washington State University, Guangbei Yi Washington State University, Kunsong Zhao The Hong Kong Polytechnic University, Xiapu Luo The Hong Kong Polytechnic University, Feng Chen University of Texas at Dallas, Haipeng Cai Washington State University
11:30
15m
Talk
MalCertain: Enhancing Deep Neural Network Based Android Malware Detection by Tackling Prediction Uncertainty
Research Track
Haodong Li Beijing University of Posts and Telecommunications, Guosheng Xu Beijing University of Posts and Telecommunications, Liu Wang Beijing University of Posts and Telecommunications, Xusheng Xiao Arizona State University, Xiapu Luo The Hong Kong Polytechnic University, Guoai Xu Harbin Institute of Technology, Shenzhen, Haoyu Wang Huazhong University of Science and Technology
11:45
15m
Talk
Pre-training by Predicting Program Dependencies for Vulnerability Analysis Tasks
Research Track
Zhongxin Liu Zhejiang University, Zhijie Tang Zhejiang University, Junwei Zhang Zhejiang University, Xin Xia Huawei Technologies, Xiaohu Yang Zhejiang University
12:00
15m
Talk
Investigating White-Box Attacks for On-Device Models
Research Track
Mingyi Zhou Monash University, Xiang Gao Beihang University, Jing Wu Monash University, Kui Liu Huawei, Hailong Sun Beihang University, Li Li Beihang University
12:15
7m
Talk
VulExplainer: A Transformer-Based Hierarchical Distillation for Explaining Vulnerability Types
Journal-first Papers
Michael Fu Monash University, Van Nguyen Monash University, Kla Tantithamthavorn Monash University, Trung Le Monash University, Australia, Dinh Phung Monash University, Australia
12:22
7m
Talk
SIEGE: A Semantics-Guided Safety Enhancement Framework for AI-enabled Cyber-Physical Systems
Journal-first Papers
Jiayang Song University of Alberta, Xuan Xie University of Alberta, Lei Ma The University of Tokyo & University of Alberta