ICPC 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Mon 28 Apr 2025 16:30 - 16:40 at 205 - Log Parsing, Bug Localisation, Review Comprehension

Bug localization, which aims to identify the software faults responsible for a given bug report, is valuable for improving software developers’ efficiency. This task is often formulated as an information retrieval problem, where potentially buggy files are retrieved and ranked according to their textual similarity to a target bug report. Many methods have been proposed to address this task, primarily focusing on bridging the semantic gap between the natural language of bug reports and the programming language of source files. Recently, Large Language Models (LLMs) have demonstrated strong capabilities in understanding both natural language and programming language, potentially bridging this semantic gap through natural conversation. However, the limited prompt context length and long-context comprehension capability of existing LLMs make it impossible to load an entire codebase (e.g., thousands of code files) directly into an LLM’s prompt to retrieve buggy code files. In this paper, we explore how to leverage existing LLMs for project-level bug localization without additional fine-tuning and propose an LLM-based Bug Localization framework, LLM-BL. Our core idea is to have LLMs perform bug localization via listwise ranking instructions while staying within the context length limit by compressing the codebase through file filtering and content reduction. Specifically, LLM-BL consists of four modules: report expansion, candidate retrieval, content reduction, and LLM-based bug localization. The first three modules retrieve potentially buggy code files and extract bug-related information from them, while the last module has the LLM perform bug localization through listwise file ranking. We select three widely used LLMs (i.e., ChatGPT, Llama 3, and CodeLlama) as the base LLMs in our framework and conduct extensive experiments on six public projects in two programming languages (i.e., Java and Python).
Experimental results demonstrate that ChatGPT and Llama 3 understand the task intention more effectively and localize buggy files more accurately than CodeLlama. Compared to existing methods, LLM-BL achieves better localization performance in a plug-and-play manner, without requiring any fine-tuning. These results demonstrate that both ChatGPT and Llama 3 are effective zero-shot rankers for bug localization.
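To make the listwise-ranking idea from the abstract concrete, the following is a minimal sketch of how retrieved candidate files might be compressed and packed into a single ranking prompt. All names here (reduce_content, build_listwise_prompt) and the reduction heuristic are hypothetical illustrations, not the paper's actual implementation or prompts.

```python
# Hypothetical sketch: content reduction + listwise ranking prompt assembly.
# The real LLM-BL pipeline (report expansion, candidate retrieval, etc.) is
# not reproduced here; only the prompt-packing step is illustrated.
import re

def reduce_content(source: str, max_lines: int = 5) -> str:
    """Keep only likely bug-relevant lines (signatures, class defs, comments)
    so many files fit in one prompt -- a stand-in for content reduction."""
    keep = [ln for ln in source.splitlines()
            if re.match(r"\s*(def |class |#|//|/\*)", ln)]
    return "\n".join(keep[:max_lines])

def build_listwise_prompt(bug_report: str, candidates: dict[str, str]) -> str:
    """Assemble one instruction asking the LLM to rank all candidate files
    at once (listwise), rather than scoring each file independently."""
    parts = [
        "You are a bug localization assistant.",
        f"Bug report:\n{bug_report}",
        "Rank the following files from most to least likely to contain the bug.",
    ]
    for i, (path, src) in enumerate(candidates.items(), 1):
        parts.append(f"[{i}] {path}\n{reduce_content(src)}")
    parts.append("Answer with the bracketed indices in ranked order, e.g. [2] > [1].")
    return "\n\n".join(parts)

report = "NullPointerException when saving an empty user profile"
files = {
    "user/profile.py": "class Profile:\n    def save(self):\n        pass\n",
    "util/strings.py": "def trim(s):\n    return s.strip()\n",
}
prompt = build_listwise_prompt(report, files)
```

The prompt produced this way would then be sent to a base LLM (e.g., ChatGPT or Llama 3), whose ranked answer is parsed back into an ordered file list.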

Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
Log Parsing, Bug Localisation, Review Comprehension
Research Track / Early Research Achievements (ERA) at 205
16:00
10m
Talk
Developing a Taxonomy for Advanced Log Parsing Techniques
Research Track
Issam Sedki Concordia University, Wahab Hamou-Lhadj Concordia University, Montreal, Canada, Otmane Ait-Mohamed Concordia University, Naser Ezzati Jivan
16:10
10m
Talk
GELog: A GPT-Enhanced Log Representation Method for Anomaly Detection
Research Track
Wenwu Xu Institute of Information Engineering, Chinese Academy of Sciences and School of Cyberspace Security, University of Chinese Academy of Sciences, Peng Wang Institute of Information Engineering, Chinese Academy of Sciences, Haichao Shi Institute of Information Engineering, Chinese Academy of Sciences, Guoqiao Zhou Institute of Information Engineering, Chinese Academy of Sciences, Junliang Yao Institute of Information Engineering, Chinese Academy of Sciences, Xiao-Yu Zhang Institute of Information Engineering, Chinese Academy of Sciences
16:20
10m
Talk
Log Parsing using LLMs with Self-Generated In-Context Learning and Self-Correction
Research Track
Yifan Wu Peking University, Siyu Yu The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Ying Li School of Software and Microelectronics, Peking University, Beijing, China
16:30
10m
Talk
LLM-BL: Large Language Models are Zero-Shot Rankers for Bug Localization
Research Track
Zhengliang Li Nanjing University, Zhiwei Jiang Nanjing University, Qiguo Huang Nanjing Audit University, Qing Gu Nanjing University
16:40
10m
Talk
Improved IR-based Bug Localization with Intelligent Relevance Feedback
Research Track
Asif Samir Dalhousie University, Masud Rahman Dalhousie University
16:50
10m
Talk
Towards Enhancing IR-based Bug Localization Leveraging Texts and Multimedia from Bug Reports
Early Research Achievements (ERA)
Shamima Yeasmin University of Saskatchewan, Chanchal K. Roy University of Saskatchewan, Canada, Kevin Schneider University of Saskatchewan, Masud Rahman Dalhousie University, Kartik Mittal University of Saskatchewan, Ryder Hardy University of Saskatchewan
17:00
10m
Talk
Building Bridges, Not Walls: Fairness-aware and Accurate Recommendation of Code Reviewers via LLM-based Agents Collaboration
Research Track
Luqiao Wang Xidian University, Qingshan Li Xidian University, Di Cui Xidian University, Mingkang Wang Xidian University, Yutong Zhao University of Central Missouri, Yongye Xu Xidian University, Huiying Zhuang Xidian University, Yangtao Zhou Xidian University, Lu Wang Xidian University
17:10
10m
Talk
Code Review Comprehension: Reviewing Strategies Seen Through Code Comprehension Theories
Research Track
Pavlina Wurzel Goncalves University of Zurich, Pooja Rani University of Zurich, Margaret-Anne Storey University of Victoria, Diomidis Spinellis Athens University of Economics and Business & Delft University of Technology, Alberto Bacchelli University of Zurich
17:20
10m
Live Q&A
Session's Discussion: "Log Parsing, Bug Localisation, Review Comprehension"
Research Track
