CAIN 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025
Mon 28 Apr 2025 10:12 - 10:15 at 208 - Lightning talks Chair(s): Scott Barnett

Large language models (LLMs) have shown impressive capabilities in coding tasks, including code understanding and generation. However, these models are also susceptible to adversarial perturbations, such as case changes or whitespace modifications, which can severely affect their performance. This study investigates the impact of semantics-preserving adversarial perturbations on the coding-task performance of GPT-3.5 and GPT-4o. In addition to evaluating individual perturbations, the research examines combined perturbation attacks, where multiple perturbations from different categories are applied together. While combined attacks showed only marginal overall improvement over individual ones, they demonstrated a synergistic effect in specific scenarios, exploiting complementary vulnerabilities in the models.
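The abstract does not specify the authors' perturbation operators beyond naming case changes and whitespace modifications, so the sketch below is only an illustrative Python approximation of how such semantics-preserving perturbations might be applied to a coding prompt, individually or combined across categories. All function names, parameters, and the example prompt are hypothetical, not taken from the paper.

```python
import random

# Hypothetical perturbation operators (illustrative only): each maps a
# prompt string to a superficially altered but semantically equivalent one.

def flip_random_case(text: str, rate: float = 0.1) -> str:
    """Randomly swap the case of a fraction of alphabetic characters."""
    return "".join(
        c.swapcase() if c.isalpha() and random.random() < rate else c
        for c in text
    )

def insert_extra_whitespace(text: str, rate: float = 0.1) -> str:
    """Randomly double a fraction of existing spaces."""
    return "".join(
        c + " " if c == " " and random.random() < rate else c
        for c in text
    )

# Two example perturbation categories; the paper may use more or different ones.
PERTURBATION_CATEGORIES = {
    "case": flip_random_case,
    "whitespace": insert_extra_whitespace,
}

def perturb(prompt: str, categories: list[str]) -> str:
    """Apply one perturbation per requested category, in sequence.
    Passing several categories mimics a combined attack."""
    for name in categories:
        prompt = PERTURBATION_CATEGORIES[name](prompt)
    return prompt

if __name__ == "__main__":
    random.seed(0)
    original = "Write a Python function that returns the n-th Fibonacci number."
    print(perturb(original, ["case"]))                # individual attack
    print(perturb(original, ["case", "whitespace"]))  # combined attack
```

In a study of this kind, the perturbed prompt would then be sent to the model under evaluation and the generated code scored against the unperturbed baseline.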

Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

10:00 - 10:30
Lightning talks / Posters at 208
Chair(s): Scott Barnett Deakin University, Australia
10:00
3m
Poster
All You Need is an AI Platform: A Proposal for a Complete Reference Architecture
Posters
Benjamin Weigell University of Augsburg, Fabian Stieler University of Augsburg, Bernhard Bauer University of Augsburg
10:03
3m
Poster
Evaluating Reinforcement Learning Safety and Trustworthiness in Cyber-Physical Systems
Posters
Katherine R. Dearstyne University of Notre Dame, Pedro Alarcon Granadeno University of Notre Dame, Theodore Chambers University of Notre Dame, Jane Cleland-Huang University of Notre Dame
10:06
3m
Poster
Finding Trojan Triggers in Code LLMs: An Occlusion-based Human-in-the-loop Approach
Posters
Aftab Hussain Texas A&M University, College Station, Rafiqul Rabin UL Research Institutes, Toufique Ahmed IBM Research, Amin Alipour University of Houston, Bowen Xu North Carolina State University, Stephen Huang University of Houston
Pre-print
10:09
3m
Poster
Navigating the Shift: Architectural Transformations and Emerging Verification Demands in AI-Enabled Cyber-Physical Systems
Posters
Hadiza Yusuf University of Michigan - Dearborn, Khouloud Gaaloul University of Michigan - Dearborn
10:12
3m
Poster
Random Perturbation Attacks on LLMs for Code Generation
Posters
Qiulu Peng Carnegie Mellon University, Chi Zhang, Ravi Mangal Colorado State University, Corina S. Păsăreanu Carnegie Mellon University; NASA Ames, Limin Jia Carnegie Mellon University
10:15
3m
Poster
Safeguarding LLM-Applications: Specify or Train?
Posters
Hala Abdelkader Applied Artificial Intelligence Institute, Deakin University, Mohamed Abdelrazek Deakin University, Australia, Sankhya Singh Deakin University, Irini Logothetis Applied Artificial Intelligence Institute, Deakin University, Priya Rani RMIT University, Rajesh Vasa Deakin University, Australia, Jean-Guy Schneider Monash University
10:18
3m
Poster
Task decomposition and RAG as Design Patterns for LLM-based Systems
Posters