ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

This program is tentative and subject to change.

Wed 15 Apr 2026 11:15 - 11:30 at Asia I - AI for Software Engineering 1

While automatic code generation using Large Language Models (LLMs) has advanced significantly, these models frequently produce code containing security vulnerabilities. Existing approaches to improve the security of automatically generated code, such as fine-tuning or prompt engineering, have shown limited success and provide minimal insight into the underlying mechanisms causing these vulnerabilities. We propose an approach grounded in mechanistic interpretability to analyze and mitigate vulnerable code generation in LLMs. We begin by examining the knowledge stored inside LLMs, identifying and disentangling knowledge representations that contribute to generating vulnerable code. Next, we leverage these insights to repair model execution in real time: when the model attempts to access vulnerability-inducing representations during inference, our method intercepts and modifies this access, improving the security of the generated code.
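The abstract does not specify the exact form of the inference-time intervention, but a common pattern in the mechanistic interpretability literature is to edit hidden activations on the fly, for example by projecting out a direction associated with the unwanted behavior. The sketch below is purely illustrative (the function name, the toy hidden state, and the "vulnerability direction" are all hypothetical, not taken from the paper):

```python
import numpy as np

def suppress_direction(hidden, direction):
    """Remove the component of a hidden state that lies along a given
    (hypothetical) vulnerability-inducing direction."""
    v = direction / np.linalg.norm(direction)
    return hidden - np.dot(hidden, v) * v

rng = np.random.default_rng(0)
h = rng.normal(size=8)   # toy hidden-state vector
v = rng.normal(size=8)   # stand-in for a learned vulnerability direction

h_edited = suppress_direction(h, v)

# After editing, the state carries no component along the suppressed direction.
assert abs(np.dot(h_edited, v / np.linalg.norm(v))) < 1e-9
```

In a real deployment such an edit would typically be applied inside the model's forward pass (e.g., via a hook on a transformer layer) whenever the flagged representation is accessed.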

We implement our methodology in a tool called thea and evaluate it on the CyberSecEval benchmark using Llama 3.1. Our results show that thea effectively improves the security of the generated code, achieving an overall improvement of around 15% across 30 different vulnerability types. In particular, it reduces buffer overflows (CWE-120) by 43% and SQL injections by 30%, and successfully addresses other kinds of vulnerabilities. Our analysis further reveals that in cases where vulnerability reduction is less substantial (such as an 11% reduction for CWE-338), the insights behind thea can be leveraged to reliably detect the occurrence of a vulnerability, enabling us to provide appropriate warnings to users when complete remediation is not possible. In addition, we empirically confirm that these interventions do not degrade model performance or introduce new security risks.

Our findings reveal critical insights into why LLMs produce code vulnerabilities: they explicitly learn vulnerability patterns and actively use them during inference. We repair the LLM executions to avoid such vulnerability patterns.

Wed 15 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
AI for Software Engineering 1
Research Track / SE In Practice (SEIP) at Asia I
11:00
15m
Talk
CREME: Robustness Enhancement of Code LLMs via Layer-Aware Model Editing
Research Track
Shuhan Liu Zhejiang University, Xing Hu Zhejiang University, Kerui Huang, Xiaohu Yang Zhejiang University, David Lo Singapore Management University, Xin Xia Zhejiang University
11:15
15m
Talk
Repairing LLM Executions for Secure Automatic Programming
Research Track
Ali El Husseini National University of Singapore, Yacine Izza National University of Singapore, Blaise Genest IPAL - CNRS - CNRS@CREATE, Abhik Roychoudhury National University of Singapore
11:30
15m
Talk
SecureReviewer: Enhancing Large Language Models for Secure Code Review through Secure-Aware Fine-Tuning
Research Track
Fang Liu Beihang University, Simiao Liu Beihang University, Yinghao Zhu Beihang University, Xiaoli Lian Beihang University, China, Li Zhang Beihang University
Pre-print
11:45
15m
Talk
Find My Code Twin: Improving Snippet Search Performance Using LLMs in Practice
SE In Practice (SEIP)
Seokjun Ko Samsung Electronics Co., Eunbi Jang AI Center, Samsung Electronics, Dahyeon Choi AI Center, Samsung Electronics, Daeha Ryu Innovation Center, Samsung Electronics, Jinyoung Park Innovation Center, Samsung Electronics, Changseo Park Innovation Center, Samsung Electronics
12:00
15m
Talk
Fixing Security Vulnerabilities with Agentic AI in OSS-Fuzz
SE In Practice (SEIP)
Yuntong Zhang National University of Singapore, Jiawei Wang University of Southern California, Dominic Berzin National University of Singapore, Martin Mirchev SonarSource, Abhik Roychoudhury National University of Singapore
12:15
15m
Talk
EvoC2Rust: A Skeleton-guided Framework for Project-Level C-to-Rust Translation
SE In Practice (SEIP)
Chaofan Wang Shanghai Jiao Tong University, Tingrui Yu Shanghai Jiao Tong University, Chen Xie Shanghai Jiao Tong University, Jie Wang Huawei Technologies Co., Ltd, Dong Chen Huawei Technologies Co., Ltd, Wenrui Zhang Huawei Technologies Co., Ltd, Yuling Shi Shanghai Jiao Tong University, Xiaodong Gu Shanghai Jiao Tong University, Beijun Shen Shanghai Jiao Tong University