ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Wed 17 Apr 2024 11:30 - 11:45 at Grande Auditório - AI & Security 1 Chair(s): Tevfik Bultan

The long-lasting Android malware threat has attracted significant research effort in malware detection. In particular, by modeling malware detection as a classification problem, machine learning-based approaches, especially deep neural network (DNN)-based approaches, are increasingly being used for Android malware detection and have achieved significant improvements over other detection approaches such as signature-based ones. However, because Android malware evolves rapidly and adversarial samples exist, DNN models trained on samples constructed earlier often yield poor decisions when used to detect newly emerging samples. Fundamentally, this phenomenon can be attributed to uncertainty in the data (noise or randomness) and weakness in the training process (insufficient training data). Overlooking these uncertainties poses risks to model predictions. In this paper, we take the first step to estimate the prediction uncertainty of DNN models in malware detection and leverage these estimates to enhance Android malware detection techniques. Specifically, besides training a DNN model to predict malware, we employ several uncertainty estimation methods to train a Correction Model that determines whether a sample is correctly or incorrectly predicted by the DNN model. We then leverage the estimated uncertainty output by the Correction Model to correct the prediction results of the DNN model, thereby improving its accuracy. Experimental results show that our proposed MalCertain effectively improves the accuracy of the underlying DNN models for Android malware detection by around 21% and significantly improves the detection effectiveness of adversarial Android malware samples by up to 94.38%. Our research sheds light on a promising direction that leverages prediction uncertainty to improve prediction-based software engineering tasks.
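
The abstract describes the pipeline only at a high level: a base DNN detector, per-sample uncertainty estimates, and a Correction Model that flags and flips predictions the detector likely got wrong. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation: it assumes Monte Carlo dropout as the uncertainty estimator and a two-feature (variance, entropy) correction classifier, both of which are hypothetical choices; the paper itself combines several uncertainty estimation methods.

# Illustrative sketch only; assumptions: MC dropout for uncertainty, a linear
# correction classifier over two uncertainty features. Not the MalCertain code.
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Base DNN mapping an app feature vector to a malware probability."""
    def __init__(self, n_features: int, hidden: int = 128, p_drop: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x)).squeeze(-1)

def mc_dropout_stats(model: Detector, x: torch.Tensor, n_passes: int = 30):
    """Run several stochastic forward passes (dropout kept active) and return
    the mean malware probability plus uncertainty features (variance, entropy)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([model(x) for _ in range(n_passes)])  # (n_passes, batch)
    mean_p = probs.mean(dim=0)
    var_p = probs.var(dim=0)
    entropy = -(mean_p * (mean_p + 1e-12).log()
                + (1 - mean_p) * (1 - mean_p + 1e-12).log())
    return mean_p, torch.stack([var_p, entropy], dim=1)

class CorrectionModel(nn.Module):
    """Predicts, from uncertainty features, whether the base prediction is wrong."""
    def __init__(self, n_uncertainty_features: int = 2):
        super().__init__()
        self.clf = nn.Linear(n_uncertainty_features, 1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.clf(u)).squeeze(-1)

def corrected_predictions(detector, corrector, x, flip_threshold: float = 0.5):
    """Flip the base model's label for samples the correction model flags as
    probably misclassified; return the corrected 0/1 labels."""
    mean_p, u = mc_dropout_stats(detector, x)
    base_label = (mean_p > 0.5).long()
    likely_wrong = corrector(u) > flip_threshold
    return torch.where(likely_wrong, 1 - base_label, base_label)

In this sketch the Correction Model would be trained on held-out samples whose labels indicate whether the base detector classified them correctly, so that at test time high-uncertainty, likely-wrong predictions are inverted rather than trusted.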

Wed 17 Apr

Displayed time zone: Lisbon

11:00 - 12:30
AI & Security 1
Research Track / Journal-first Papers at Grande Auditório
Chair(s): Tevfik Bultan University of California at Santa Barbara
11:00
15m
Talk
Towards More Practical Automation of Vulnerability Assessment
Research Track
Shengyi Pan Zhejiang University, Lingfeng Bao Zhejiang University, Jiayuan Zhou Huawei, Xing Hu Zhejiang University, Xin Xia Huawei Technologies, Shanping Li Zhejiang University
11:15
15m
Talk
VGX: Large-Scale Sample Generation for Boosting Learning-Based Software Vulnerability Analyses
Research Track
Yu Nong Washington State University, Richard Fang Washington State University, Guangbei Yi Washington State University, Kunsong Zhao The Hong Kong Polytechnic University, Xiapu Luo The Hong Kong Polytechnic University, Feng Chen University of Texas at Dallas, Haipeng Cai Washington State University
11:30
15m
Talk
MalCertain: Enhancing Deep Neural Network Based Android Malware Detection by Tackling Prediction Uncertainty
Research Track
Haodong Li Beijing University of Posts and Telecommunications, Guosheng Xu Beijing University of Posts and Telecommunications, Liu Wang Beijing University of Posts and Telecommunications, Xusheng Xiao Arizona State University, Xiapu Luo The Hong Kong Polytechnic University, Guoai Xu Harbin Institute of Technology, Shenzhen, Haoyu Wang Huazhong University of Science and Technology
11:45
15m
Talk
Pre-training by Predicting Program Dependencies for Vulnerability Analysis Tasks
Research Track
Zhongxin Liu Zhejiang University, Zhijie Tang Zhejiang University, Junwei Zhang Zhejiang University, Xin Xia Huawei Technologies, Xiaohu Yang Zhejiang University
12:00
15m
Talk
Investigating White-Box Attacks for On-Device Models
Research Track
Mingyi Zhou Monash University, Xiang Gao Beihang University, Jing Wu Monash University, Kui Liu Huawei, Hailong Sun Beihang University, Li Li Beihang University
12:15
7m
Talk
VulExplainer: A Transformer-Based Hierarchical Distillation for Explaining Vulnerability Types
Journal-first Papers
Michael Fu Monash University, Van Nguyen Monash University, Kla Tantithamthavorn Monash University, Trung Le Monash University, Australia, Dinh Phung Monash University, Australia
12:22
7m
Talk
SIEGE: A Semantics-Guided Safety Enhancement Framework for AI-enabled Cyber-Physical Systems
Journal-first Papers
Jiayang Song University of Alberta, Xuan Xie University of Alberta, Lei Ma The University of Tokyo & University of Alberta