ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States
Tue 29 Oct 2024 11:50 - 12:00 at Magnolia - SE for AI 1 Chair(s): Chengcheng Wan

Modern large language models (LLMs), such as ChatGPT, have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon previous neural network models of code, such as code2seq or seq2seq, which already demonstrated competitive results on tasks such as code summarization and identifying code vulnerabilities. However, these earlier code models were shown to be vulnerable to adversarial examples, i.e., small syntactic perturbations designed to “fool” the models. In this paper, we first study the transferability of adversarial examples, generated through white-box attacks on smaller code models, to LLMs. We also propose a new attack that uses an LLM to generate the perturbations. Further, we propose novel cost-effective techniques to defend LLMs against such adversaries via prompting, without incurring the cost of retraining. These prompt-based defenses involve modifying the prompt to include additional information, such as examples of adversarially perturbed code and explicit instructions for reversing adversarial perturbations. Our preliminary experiments show the effectiveness of the attacks and of the proposed defenses on popular LLMs such as GPT-3.5 and GPT-4.
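For illustration, below is a minimal sketch of what a prompt-based defense of the kind the abstract describes might look like. It is not the authors' implementation: the query_llm stub, the few-shot example, and all identifiers are assumptions made for this sketch.

    # Sketch of a prompt-based defense: the prompt shows the model one
    # example of an adversarial perturbation and instructs it to reverse
    # any such edits before performing the task. All names here are
    # illustrative, not taken from the paper.

    FEW_SHOT_EXAMPLE = (
        "Perturbed: int xq7 = n; return xq7 * 2;  // misleading rename\n"
        "Restored:  return n * 2;\n"
    )

    def defended_prompt(code: str, task: str) -> str:
        """Wrap possibly-perturbed code with defensive instructions."""
        return (
            "The following code may contain small adversarial perturbations,\n"
            "such as misleading identifier renames or inserted dead code.\n"
            "First reverse any such perturbations, then perform the task on\n"
            "the cleaned code.\n\n"
            f"Example perturbation and its reversal:\n{FEW_SHOT_EXAMPLE}\n"
            f"Task: {task}\n\nCode:\n{code}\n"
        )

    def query_llm(prompt: str) -> str:
        """Placeholder: plug in a GPT-3.5/GPT-4 chat-completion call here."""
        raise NotImplementedError

    if __name__ == "__main__":
        snippet = "int len_7x = strlen(s); char b[10]; strcpy(b, s);"
        print(defended_prompt(snippet, "Does this code contain a vulnerability?"))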

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
SE for AI 1 (NIER Track / Journal-first Papers / Research Papers) at Magnolia
Chair(s): Chengcheng Wan, East China Normal University
10:30
15m
Talk
Evaluating Terminology Translation in Machine Translation Systems via Metamorphic Testing
Research Papers
Yihui Xu (Soochow University), Yanhui Li (Nanjing University), Jun Wang (Nanjing University), Xiaofang Zhang (Soochow University)
10:45
15m
Talk
Mutual Learning-Based Framework for Enhancing Robustness of Code Models via Adversarial Training
Research Papers
Yangsen Wang (Peking University), Yizhou Chen (Peking University), Yifan Zhao (Peking University), Zhihao Gong (Peking University), Junjie Chen (Tianjin University), Dan Hao (Peking University)
11:00
15m
Talk
Supporting Safety Analysis of Image-processing DNNs through Clustering-based Approaches
Journal-first Papers
Mohammed Attaoui (University of Luxembourg), Fabrizio Pastore (University of Luxembourg), Lionel Briand (University of Ottawa, Canada; Lero Centre, University of Limerick, Ireland)
11:15
15m
Talk
Challenges and Practices of Deep Learning Model Reengineering: A Case Study on Computer Vision
Journal-first Papers
Wenxin Jiang (Purdue University), Vishnu Banna (Purdue University), Naveen Vivek (Purdue University), Abhinav Goel (Purdue University), Nicholas Synovic (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago), James C. Davis (Purdue University)
11:30
10m
Talk
A Conceptual Framework for Quality Assurance of LLM-based Socio-critical Systems
NIER Track
Luciano Baresi (Politecnico di Milano), Matteo Camilli (Politecnico di Milano), Tommaso Dolci (Politecnico di Milano), Giovanni Quattrocchi (Politecnico di Milano)
11:40
10m
Talk
Towards Robust ML-enabled Software Systems: Detecting Out-of-Distribution data using Gini Coefficients
NIER Track
Hala Abdelkader (Applied Artificial Intelligence Institute, Deakin University), Jean-Guy Schneider (Monash University), Mohamed Abdelrazek (Deakin University, Australia), Priya Rani (RMIT University), Rajesh Vasa (Deakin University, Australia)
11:50
10m
Talk
Attacks and Defenses for Large Language Models on Coding Tasks
NIER Track
Chi Zhang, Zifan Wang (Center for AI Safety), Ruoshi Zhao (Independent Researcher), Ravi Mangal (Colorado State University), Matt Fredrikson (Carnegie Mellon University), Limin Jia, Corina S. Păsăreanu (Carnegie Mellon University; NASA Ames)