Iterative Generation of Adversarial Example for Deep Code Models
SE for AI (Award Winner)
This program is tentative and subject to change.
Deep code models are vulnerable to adversarial attacks, which allow semantically identical inputs to elicit different predictions. Current black-box attack methods typically rank identifiers by their impact on the model, using custom importance scores or program context, and incrementally replace identifiers to generate adversarial examples. However, these methods often fail to fully leverage feedback from failed attacks to guide subsequent attacks, leading to problems such as local-optima bias and efficiency dilemmas. In this paper, we introduce ITGen, a novel black-box adversarial example generation method that iteratively utilizes feedback from failed attacks to refine the generation process. It employs a bit-vector-based representation of code variants to mitigate local-optima bias. By integrating these bit vectors with feedback from failed attacks, ITGen uses an enhanced Bayesian optimization framework to efficiently predict the most promising code variants, significantly reducing the search space and thus addressing the efficiency dilemma. We conducted experiments on a total of nine deep code models covering both understanding and generation tasks, demonstrating ITGen’s effectiveness and efficiency, as well as its ability to enhance model robustness through adversarial fine-tuning. For example, on average, ITGen improves the attack success rate by 47.98% and 69.70% over the state-of-the-art techniques (i.e., ALERT and BeamAttack), respectively.
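To make the bit-vector idea concrete, the sketch below shows a much simpler stand-in for ITGen's search (not the paper's algorithm, which uses an enhanced Bayesian optimization surrogate): each bit marks whether one identifier in the program is replaced by a substitute, variants are tried with the fewest substitutions first, and every failed query is cached so that feedback from failed attacks prunes the remaining search space. The oracle `attack_success` is a hypothetical stub standing in for a query to the victim model.

```python
import itertools

def attack_success(bits):
    # Hypothetical black-box oracle (a stand-in for querying the victim
    # model): here the attack "succeeds" when identifier 0 is renamed and
    # at least three identifiers are substituted in total.
    return bits[0] == 1 and sum(bits) >= 3

def iterative_bitvector_search(n_ids, oracle, budget=50):
    """Search identifier-substitution bit vectors, fewest edits first.

    Bit i set means "replace identifier i with its substitute". Every
    failed query is recorded, so feedback from failed attacks shrinks
    the space that later iterations consider.
    """
    tried = set()  # feedback memory: variants known not to fool the model
    for k in range(1, n_ids + 1):            # minimal perturbations first
        for combo in itertools.combinations(range(n_ids), k):
            bits = tuple(1 if i in combo else 0 for i in range(n_ids))
            if bits in tried or len(tried) >= budget:
                continue
            if oracle(bits):
                return bits                  # adversarial variant found
            tried.add(bits)                  # record the failure
    return None

found = iterative_bitvector_search(6, attack_success)
```

With this toy oracle, the search exhausts all one- and two-bit variants (21 failures) before succeeding on the first three-bit variant that renames identifier 0; ITGen's surrogate model replaces this exhaustive ordering with a learned prediction of which variant to try next.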
Fri 2 May (displayed time zone: Eastern Time, US & Canada)
11:00 - 12:30
11:00 15m Talk | A Tale of Two DL Cities: When Library Tests Meet Compiler (SE for AI, Research Track)
Qingchao Shen (Tianjin University), Yongqiang Tian (Hong Kong University of Science and Technology), Haoyang Ma (Hong Kong University of Science and Technology), Junjie Chen (Tianjin University), Lili Huang (College of Intelligence and Computing, Tianjin University), Ruifeng Fu (Tianjin University), Shing-Chi Cheung (Hong Kong University of Science and Technology), Zan Wang (Tianjin University)
11:15 15m Talk | Iterative Generation of Adversarial Example for Deep Code Models (SE for AI, Research Track; Award Winner)
11:30 15m Talk | On the Mistaken Assumption of Interchangeable Deep Reinforcement Learning Implementations (SE for AI, Research Track)
Rajdeep Singh Hundal (National University of Singapore), Yan Xiao (Sun Yat-sen University), Xiaochun Cao (Sun Yat-sen University), Jin Song Dong (National University of Singapore), Manuel Rigger (National University of Singapore)
11:45 15m Talk | µPRL: a Mutation Testing Pipeline for Deep Reinforcement Learning based on Real Faults (SE for AI, Research Track)
Deepak-George Thomas (Tulane University), Matteo Biagiola (Università della Svizzera italiana), Nargiz Humbatova (Università della Svizzera italiana), Mohammad Wardat (Oakland University, USA), Gunel Jahangirova (King's College London), Hridesh Rajan (Tulane University), Paolo Tonella (USI Lugano)
12:00 15m Talk | Testing and Understanding Deviation Behaviors in FHE-hardened Machine Learning Models (SE for AI, Research Track)
Yiteng Peng (Hong Kong University of Science and Technology), Daoyuan Wu (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Dongwei Xiao (Hong Kong University of Science and Technology), Zhenlan Ji (The Hong Kong University of Science and Technology), Juergen Rahmel (HSBC), Shuai Wang (Hong Kong University of Science and Technology)
12:15 15m Talk | TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance (SE for AI, Research Track)