ICSME 2025
Sun 7 - Fri 12 September 2025 Auckland, New Zealand

This program is tentative and subject to change.

Fri 12 Sep 2025 11:10 - 11:20 at Case Room 3 260-055 - Session 13 - Reuse 1 Chair(s): Banani Roy

Large Language Models (LLMs) have shown great potential in code-related software engineering tasks, including code generation, classification, and understanding. While current research primarily focuses on direct inference with single LLMs, this approach may fall short for complex tasks due to ambiguous instructions. To address this limitation, we propose LLM self-negotiation, where multiple LLMs collaborate and debate to reach consensus on code-related tasks. This approach aims to better handle unclear instructions and improve overall effectiveness. We evaluated LLM self-negotiation in three key software engineering domains: Equivalent Mutant Detection (EMD), Automated Vulnerability Detection (AVD), and Automated Program Repair (APR). These domains represent distinct aspects of code-related tasks: functionality understanding, code classification, and code generation, respectively. Our experimental results revealed varying effectiveness across domains. In EMD, LLM self-negotiation demonstrated remarkable improvements, with most models showing performance gains between 114.72% and 351.01% (though CodeLlama experienced a minor 4.5% decrease in F1-score). For APR tasks, self-negotiation performed comparably to single LLM implementations. However, in AVD, the results were mixed: while Vicuna showed improved F1-scores, most models exhibited lower recall rates. These findings indicate that LLM self-negotiation is particularly promising for functionality understanding tasks, while its application to code classification and generation requires further research and refinement.
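The abstract does not spell out the negotiation mechanics, but a debate-until-consensus loop of this kind can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the `negotiate` loop and the toy stub agents (which stand in for real LLM API calls) are assumptions for the sake of example.

```python
from collections import Counter

def negotiate(agents, task, max_rounds=3):
    """Simple self-negotiation loop: each agent answers the task,
    sees its peers' answers from the previous round, and may revise.
    Stops when all agents agree or the round budget is exhausted;
    falls back to a majority vote otherwise."""
    history = []
    answers = [agent(task, history) for agent in agents]
    for _ in range(max_rounds):
        if len(set(answers)) == 1:      # consensus reached
            break
        history.append(list(answers))    # share peers' positions
        answers = [agent(task, history) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-ins for LLMs: each holds an initial opinion, then
# defers to the previous round's majority (a real system would
# prompt each model with the peers' answers and rationales).
def make_agent(initial):
    def agent(task, history):
        if not history:
            return initial
        return Counter(history[-1]).most_common(1)[0][0]
    return agent

agents = [make_agent("equivalent"), make_agent("equivalent"),
          make_agent("not equivalent")]
print(negotiate(agents, "Is mutant M equivalent to program P?"))
# → equivalent
```

With real models, each agent call would be a prompted inference that includes the peers' latest answers, which is what lets an ambiguous instruction get resolved through debate rather than a single pass.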

Fri 12 Sep

Displayed time zone: Auckland, Wellington

10:30 - 12:00
Session 13 - Reuse 1 (NIER Track / Research Papers Track / Industry Track / Registered Reports) at Case Room 3 260-055
Chair(s): Banani Roy University of Saskatchewan
10:30
15m
From Release to Adoption: Challenges in Reusing Pre-trained AI Models for Downstream Developers
Research Papers Track
Peerachai Banyongrakkul The University of Melbourne, Mansooreh Zahedi The University of Melbourne, Patanamon Thongtanunam University of Melbourne, Christoph Treude Singapore Management University, Haoyu Gao The University of Melbourne
Pre-print
10:45
15m
Are Classical Clone Detectors Good Enough For the AI Era?
Research Papers Track
Ajmain Inqiad Alam University of Saskatchewan, Palash Ranjan Roy University of Saskatchewan, Farouq Al-Omari Thompson Rivers University, Chanchal K. Roy University of Saskatchewan, Banani Roy University of Saskatchewan, Kevin Schneider University of Saskatchewan
11:00
10m
Can LLMs Write CI? A Study on Automatic Generation of GitHub Actions Configurations
NIER Track
Taher A. Ghaleb Trent University, Dulina Rathnayake Department of Computer Science, Trent University, Peterborough, Canada
Pre-print
11:10
10m
A Preliminary Study on Large Language Models Self-Negotiation in Software Engineering
NIER Track
Chunrun Tao Kyushu University, Honglin Shu Kyushu University, Masanari Kondo Kyushu University, Yasutaka Kamei Kyushu University
11:20
10m
CIgrate: Automating CI Service Migration with Large Language Models
Registered Reports
Md Nazmul Hossain Department of Computer Science, Trent University, Peterborough, Canada, Taher A. Ghaleb Trent University
Pre-print
11:30
15m
A Deep Dive into Retrieval-Augmented Generation for Code Completion: Experience on WeChat
Industry Track
Zezhou Yang Tencent Inc., Ting Peng Tencent Inc., Cuiyun Gao Harbin Institute of Technology, Chaozheng Wang The Chinese University of Hong Kong, Hailiang Huang Tencent Inc., Yuetang Deng Tencent
11:45
10m
Inferring Attributed Grammars from Parser Implementations
NIER Track
Andreas Pointner University of Applied Sciences Upper Austria, Hagenberg, Austria, Josef Pichler University of Applied Sciences Upper Austria, Herbert Prähofer Johannes Kepler University Linz
Pre-print