ICSE 2026
Sun 12 - Sat 18 April 2026, Rio de Janeiro, Brazil

This program is tentative and subject to change.

Wed 15 Apr 2026, 15:00 - 15:15, at Asia IV - AI for Software Engineering 5. Chair(s): Jan Bosch

Static analysis tools (SATs) are widely adopted in both academia and industry for improving software quality, yet their practical use is often hindered by high false positive rates, especially in large-scale enterprise systems. These false alarms demand substantial manual inspection, creating severe inefficiencies in industrial code review. While recent work has demonstrated the potential of large language models (LLMs) for false alarm reduction on open-source benchmarks, their effectiveness in real-world enterprise settings remains unclear. To bridge this gap, we conduct the first comprehensive empirical study of diverse LLM-based false alarm reduction techniques in an industrial context at Tencent, one of the largest IT companies in China. Using data from Tencent’s enterprise-customized SAT on its large-scale Advertising and Marketing Services software, we construct a dataset of 433 alarms (328 false positives, 105 true positives) covering three common bug types. Based on interviews with developers and analysis of the data, our results highlight the prevalence of false positives, which waste substantial manual effort (e.g., 10–20 minutes of manual inspection per alarm). Meanwhile, our results show the substantial potential of LLMs for reducing false alarms in industrial settings (e.g., hybrid techniques combining LLMs and static analysis eliminate 94–98% of false positives while retaining high recall). Furthermore, LLM-based techniques are cost-effective, with per-alarm costs as low as 2.1–109.5 seconds and $0.0011–$0.12, representing orders-of-magnitude savings compared to manual review. Finally, our case analysis identifies key limitations of LLM-based false alarm reduction in industrial settings.
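The abstract does not describe how the studied hybrid LLM and static-analysis techniques are wired together, so the sketch below is only a rough illustration of the general shape such a triage step can take: each SAT alarm (warning text plus surrounding code) is packed into a prompt, and the model's verdict is used to suppress likely false positives before human review. All names here (`Alarm`, `ask_llm`, the prompt wording) are hypothetical and are not taken from the paper.

```python
"""Minimal sketch of LLM-assisted triage for static-analysis alarms.

This is NOT the technique evaluated in the paper above; it only illustrates
the general structure of a hybrid pipeline. `ask_llm` is a hypothetical stub
that a real system would replace with an actual LLM client.
"""
from dataclasses import dataclass


@dataclass
class Alarm:
    file: str
    line: int
    bug_type: str   # e.g. "null dereference"
    message: str    # the SAT's warning text
    snippet: str    # surrounding source code


def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a fixed verdict for this sketch."""
    return "FALSE POSITIVE"


def triage(alarm: Alarm) -> bool:
    """Return True if the alarm should be escalated to a human reviewer."""
    prompt = (
        f"A static analyzer reports a possible {alarm.bug_type} at "
        f"{alarm.file}:{alarm.line}:\n{alarm.message}\n\n"
        f"Code:\n{alarm.snippet}\n\n"
        "Answer with exactly 'TRUE POSITIVE' or 'FALSE POSITIVE'."
    )
    verdict = ask_llm(prompt).strip().upper()
    # Keep the alarm unless the model clearly labels it a false positive.
    return verdict != "FALSE POSITIVE"


if __name__ == "__main__":
    example = Alarm("ads/bid.cc", 42, "null dereference",
                    "pointer 'ctx' may be null",
                    "if (ctx) { ... }\nuse(ctx);")
    print("escalate to reviewer:", triage(example))
```

In a deployment like the one the paper studies, suppressed alarms would still be logged so that the suppression rate and recall on known true positives can be monitored over time.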


Wed 15 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

14:00 - 15:30
AI for Software Engineering 5 (Research Track / SE In Practice (SEIP)) at Asia IV
Chair(s): Jan Bosch (Chalmers University of Technology)
14:00
15m
Talk
SpecGuru: Hierarchical LLM-Driven API Points-to Specification Generation with Self-Validation
Research Track
Shuangxiang Kan (UNSW), Yuekang Li (UNSW), Xiao Cheng (Macquarie University), Yulei Sui (University of New South Wales)
14:15
15m
Talk
Panoptes: A Profile Clustering Framework for Context-Aware Binary Optimization
Research Track
Edwin Kayang (Arizona State University), Eric Jahns (Arizona State University), Mishel Jyothis Paul (Arizona State University), Michel Kinsy (Arizona State University)
14:30
15m
Talk
HoarePrompt: Structural Reasoning About Program Correctness in Natural Language (Award Winner)
Research Track
Dimitrios Stamatios Bouras (Peking University), Yihan Dai (Nankai University), Tairan Wang (University College London), Yingfei Xiong (Peking University), Sergey Mechtaev (Peking University)
14:45
15m
Talk
Large Language Model-Aided Partial Program Dependence Analysis
Research Track
Xiaokai Rong (The University of Texas at Dallas), Aashish Yadavally (University of Central Florida), Tien N. Nguyen (University of Texas at Dallas)
Pre-print
15:00
15m
Talk
Reducing False Positives in Static Bug Detection with LLMs: An Empirical Study in Industry
SE In Practice (SEIP)
Xueying Du (Fudan University), Jiayi Feng (Fudan University), Yi Zou (Fudan University), Wei Xu (Tencent), Jie Ma (Tencent), Wei Zhang (Tencent), Sisi Liu (Tencent), Xin Peng (Fudan University), Yiling Lou (University of Illinois at Urbana-Champaign)
15:15
15m
Talk
CASCADE: LLM-powered JavaScript Deobfuscator at Google
SE In Practice (SEIP)
Shan Jiang (UT Austin), Pranoy Kovuri (Google), David Tao (Google), Zhixun Tan (Google)