ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Tue 29 Oct 2024 10:45 - 11:00 at Magnolia - SE for AI 1 Chair(s): Chengcheng Wan

Deep code models (DCMs) have achieved impressive results and have been widely applied to various code-related tasks. However, existing studies show that some DCMs are not robust: even small noise in the input data can lead to erroneous outputs. This phenomenon seriously hinders the application of DCMs in real-world scenarios. To address this limitation, we propose MARVEL, a mutual learning-based framework for enhancing the robustness of DCMs via adversarial training. Specifically, MARVEL initializes two identical DCMs: one receives Gaussian-distorted data and performs adversarial training, while the other receives the clean data. The two DCMs are then trained together to fit not only the true labels but also each other's internal parameters. Our intuition is that one DCM gains robustness by training on noisy data, while the other achieves accurate prediction performance by learning from the clean data. Their mutual learning enables the DCM to balance both robustness and predictive performance.
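The mutual-learning idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration using NumPy, with two logistic regressions standing in for the two identical DCMs: model A trains on Gaussian-distorted inputs, model B on clean inputs, and each loss combines cross-entropy on the true labels with a term pulling the two parameter vectors toward each other. The hyperparameters (`lr`, `lam`, `sigma`) and the squared-distance coupling are illustrative assumptions, not MARVEL's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for code-model inputs.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two identical "models" (logistic regressions as stand-ins for DCMs).
w_a = np.zeros(8)  # trained on Gaussian-distorted inputs
w_b = np.zeros(8)  # trained on clean inputs

lr, lam, sigma = 0.1, 0.5, 0.3  # hypothetical hyperparameters
for _ in range(300):
    X_noisy = X + sigma * rng.normal(size=X.shape)  # Gaussian distortion
    p_a = sigmoid(X_noisy @ w_a)
    p_b = sigmoid(X @ w_b)
    # Gradient of cross-entropy on the true labels, plus a mutual-learning
    # term that makes each model also fit the other's parameters.
    g_a = X_noisy.T @ (p_a - y) / len(y) + lam * (w_a - w_b)
    g_b = X.T @ (p_b - y) / len(y) + lam * (w_b - w_a)
    w_a -= lr * g_a
    w_b -= lr * g_b

# Clean accuracy of the clean-data model; robust accuracy of the noisy one.
acc_clean = ((sigmoid(X @ w_b) > 0.5) == y).mean()
acc_noisy = ((sigmoid((X + sigma * rng.normal(size=X.shape)) @ w_a) > 0.5) == y).mean()
print(acc_clean, acc_noisy)
```

Because the coupling term keeps the two parameter vectors close, the noisy-trained model inherits predictive accuracy from the clean one while the clean one absorbs some of the noise tolerance, mirroring the balance the abstract describes.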

We selected three popular DCMs, five open-source datasets, and three state-of-the-art attack methods, and evaluated MARVEL on the 45 (3×5×3) downstream tasks formed by their combinations. Additionally, we used two state-of-the-art robustness-enhancement techniques as baselines. The experimental results show that MARVEL significantly enhances the robustness of DCMs across all 45 tasks. In 43 of the 45 tasks, MARVEL outperforms the two baselines, with average improvements of 15.19% and 31.80%, respectively. At the same time, MARVEL maintains the inherent accuracy within an error margin of ±2.43% compared to the original DCMs.

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
SE for AI 1 (NIER Track / Journal-first Papers / Research Papers) at Magnolia
Chair(s): Chengcheng Wan East China Normal University
10:30
15m
Talk
Evaluating Terminology Translation in Machine Translation Systems via Metamorphic Testing
Research Papers
Yihui Xu Soochow University, Yanhui Li Nanjing University, Jun Wang Nanjing University, Xiaofang Zhang Soochow University
DOI
10:45
15m
Talk
Mutual Learning-Based Framework for Enhancing Robustness of Code Models via Adversarial Training
Research Papers
Yangsen Wang Peking University, Yizhou Chen Peking University, Yifan Zhao Peking University, Zhihao Gong Peking University, Junjie Chen Tianjin University, Dan Hao Peking University
DOI Pre-print
11:00
15m
Talk
Supporting Safety Analysis of Image-processing DNNs through Clustering-based Approaches
Journal-first Papers
Mohammed Attaoui University of Luxembourg, Fabrizio Pastore University of Luxembourg, Lionel Briand University of Ottawa, Canada; Lero centre, University of Limerick, Ireland
11:15
15m
Talk
Challenges and Practices of Deep Learning Model Reengineering: A Case Study on Computer Vision
Journal-first Papers
Wenxin Jiang Purdue University, Vishnu Banna Purdue University, Naveen Vivek Purdue University, Abhinav Goel Purdue University, Nicholas Synovic Loyola University Chicago, George K. Thiruvathukal Loyola University Chicago, James C. Davis Purdue University
Link to publication DOI Media Attached File Attached
11:30
10m
Talk
A Conceptual Framework for Quality Assurance of LLM-based Socio-critical Systems
NIER Track
Luciano Baresi Politecnico di Milano, Matteo Camilli Politecnico di Milano, Tommaso Dolci Politecnico di Milano, Giovanni Quattrocchi Politecnico di Milano
11:40
10m
Talk
Towards Robust ML-enabled Software Systems: Detecting Out-of-Distribution data using Gini Coefficients
NIER Track
Hala Abdelkader Applied Artificial Intelligence Institute, Deakin University, Jean-Guy Schneider Monash University, Mohamed Abdelrazek Deakin University, Australia, Priya Rani RMIT University, Rajesh Vasa Deakin University, Australia
11:50
10m
Talk
Attacks and Defenses for Large Language Models on Coding Tasks
NIER Track
Chi Zhang, Zifan Wang Center for AI Safety, Ruoshi Zhao Independent Researcher, Ravi Mangal Colorado State University, Matt Fredrikson Carnegie Mellon University, Limin Jia, Corina S. Păsăreanu Carnegie Mellon University; NASA Ames