ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

This program is tentative and subject to change.

Thu 16 Apr 2026 11:30 - 11:45 at Asia IV - AI for Software Engineering 11

Agentic Automated Program Repair (APR) is increasingly tackling complex, repository-level bugs in industry, but agent-generated patches still need human review before they are committed, to ensure they actually address the bug. Showing unlikely patches to developers can create substantial noise, wasting valuable developer time and eroding trust in automated code changes. We introduce two complementary LLM-based policies to reduce such noise: bug abstention and patch validation. Bug abstention excludes bugs that the agentic APR system is unlikely to fix. Patch validation rejects patches that are unlikely to be a good fix for the given bug. We evaluate both policies on three sets of bugs from Google’s codebase and their candidate patches generated by an internal agentic APR system. On a set of 174 human-reported bugs, removing bugs and patch trajectories rejected by our policies raises success rates by up to 13 and 15 percentage points, respectively, and by up to 39 percentage points in combination. On null pointer exceptions and sanitizer-reported bugs with machine-generated bug reports, patch validation also improves average single-sample success rates. This two-policy approach provides a practical path to the reliable, industrial-scale deployment of agentic APR systems.


Thu 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30
AI for Software Engineering 11
Research Track / SE In Practice (SEIP) at Asia IV
11:00
15m
Talk
LLM-based Agents for Automated Bug Fixing: How Far Are We?
Research Track
Xiangxin Meng Bytedance, Zexiong Ma Peking University, Pengfei Gao ByteDance, Chao Peng ByteDance
11:15
15m
Talk
Depradar: Agentic Coordination for Context-Aware Defect Impact Analysis in Deep Learning Libraries
Research Track
Yi Gao Zhejiang University, Xing Hu Zhejiang University, Tongtong Xu Huawei, Jiali Zhao Huawei, Xiaohu Yang Zhejiang University, Xin Xia Zhejiang University
11:30
15m
Talk
Abstain and Validate: A Dual-LLM Policy for Reducing Noise in Agentic Program Repair
SE In Practice (SEIP)
José Pablo Cambronero Google, USA, Michele Tufano Google, Sherry Shi Google, Renyao Wei Google, Grant Uy Google, Sam Cheng Google, Chin-Jung Liu Google, Shiying Pan Google, Satish Chandra Google, Inc, Patrick Rondon Google
11:45
15m
Talk
OpenDerisk: An Industrial Framework for AI-Driven SRE, with Design, Implementation, and Case Studies
SE In Practice (SEIP)
12:00
15m
Talk
Intelligent Triage: Interpretable Incident Triage Workflow using LLM Extracted Triage Reasoning
SE In Practice (SEIP)
Jianing Liu Fudan University, Hao Ren University of Illinois Urbana-Champaign, Yu Kang Microsoft, Minghua Ma Microsoft, Fangkai Yang Microsoft Research, Yong Xu Microsoft Research, Xin Gao Microsoft 365, Meng Zhang, Hongbin Wang Microsoft, Xuedong Gao Microsoft, Qingwei Lin Microsoft, Yingnong Dang Microsoft Azure, Saravan Rajmohan Microsoft, Dongmei Zhang Microsoft, Qi Zhang Microsoft, Chetan Bansal Microsoft Research, Yangfan Zhou Fudan University
12:15
15m
Talk
How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code?
Research Track
Hua Yang North Carolina State University, Alejandro Velasco William & Mary, Le-Cong Thanh The University of Melbourne, Md Nazmul Haque North Carolina State University, Bowen Xu North Carolina State University, Denys Poshyvanyk William & Mary