ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Mon 17 Nov 2025 16:10 - 16:20 at Grand Hall 4 - Code Review & Process

Large language model (LLM)-powered code review automation tools have been introduced to generate code review comments. However, not all generated comments drive code changes. Understanding which types of generated review comments are likely to trigger code changes is crucial for identifying actionable ones. In this paper, we investigate (1) the types of review comments written by humans and LLMs, and (2) the types of generated comments that developers most frequently resolve. To do so, we developed an LLM-as-a-Judge approach that automatically classifies review comments according to our taxonomy of five categories. Our empirical study confirms that (1) the LLM reviewer and human reviewers exhibit distinct strengths and weaknesses depending on the project context, and (2) comments related to readability, bugs, and maintainability had higher resolution rates than those focused on code design. These results suggest that a substantial proportion of LLM-generated comments are actionable and can be resolved by developers. Our work highlights the complementarity between LLM and human reviewers and offers suggestions for improving the practical effectiveness of LLM-powered code review tools.
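
As a rough illustration of the LLM-as-a-Judge setup the abstract describes, the sketch below classifies a single review comment into one of five categories. The prompt wording, the llm_complete stub, and the "other" placeholder label are assumptions for illustration only; the abstract names readability, bugs, maintainability, and code design but does not name the taxonomy's fifth category.

    # Minimal sketch of an LLM-as-a-Judge classifier for review comments,
    # assuming a generic chat-completion backend. Only the readability,
    # bug, maintainability, and code-design labels come from the abstract;
    # "other" stands in for the unnamed fifth category.

    CATEGORIES = ["readability", "bug", "maintainability", "code design", "other"]

    PROMPT_TEMPLATE = (
        "Classify the following code review comment into exactly one of "
        "these categories: {categories}.\n"
        "Respond with the category name only.\n\n"
        "Review comment:\n{comment}\n"
    )

    def llm_complete(prompt: str) -> str:
        """Hypothetical stand-in for any LLM chat-completion API call."""
        raise NotImplementedError("Wire this to your LLM provider.")

    def classify_comment(comment: str) -> str:
        prompt = PROMPT_TEMPLATE.format(
            categories=", ".join(CATEGORIES), comment=comment
        )
        answer = llm_complete(prompt).strip().lower()
        # Fall back to the placeholder label if the model strays from the taxonomy.
        return answer if answer in CATEGORIES else "other"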


Mon 17 Nov

Displayed time zone: Seoul

16:00 - 17:00
Code Review & Process
Industry Showcase / NIER Track at Grand Hall 4
16:00
10m
Talk
SGCR: A Specification-Grounded Framework for Trustworthy LLM Code Review
Industry Showcase
Kai Wang HiThink Research, Bingcheng Mao HiThink Research, Shuai Jia HiThink Research, Yujie Ding HiThink Research, Dongming Han HiThink Research, Tianyi Ma HiThink Research, Bin Cao Zhejiang University of Technology
16:10
10m
Talk
What Types of Code Review Comments Do Developers Most Frequently Resolve?
Industry Showcase
Saul Goldman The University of Melbourne, Hong Yi Lin The University of Melbourne, Jirat Pasuksmit Atlassian, Patanamon Thongtanunam The University of Melbourne, Kla Tantithamthavorn Monash University and Atlassian, Zhe Wang Institute of Computing Technology at Chinese Academy of Sciences; Zhongguancun Laboratory, Ruixiong Zhang Atlassian, Ali Behnaz Atlassian, Fan Jiang Atlassian, Michael Siers Atlassian, Ryan Jiang Atlassian, Mike Buller Atlassian, Minwoo Jeong Atlassian, Ming Wu Atlassian
16:20
10m
Talk
Vessel: A Taxonomy of Reproducibility Issues for Container Images
NIER Track
Kevin Pitstick Carnegie Mellon Software Engineering Institute, Alex Derr Carnegie Mellon Software Engineering Institute, Lihan Zhan Carnegie Mellon Software Engineering Institute, Sebastian Echeverria Carnegie Mellon Software Engineering Institute
16:30
10m
Talk
From Modules to Marketplaces: A Vision for Composable Capability Sharing Across Organizations
NIER Track
Wei-Ji Wang National Taiwan University & Chunghwa Telecom Laboratories
16:40
10m
Talk
Towards Automated Governance: A DSL for Human-Agent Collaboration in Software Projects
NIER Track
Adem Ait University of Luxembourg, Gwendal Jouneaux Luxembourg Institute of Science and Technology, Javier Luis Cánovas Izquierdo Universitat Oberta de Catalunya, Jordi Cabot Luxembourg Institute of Science and Technology
Pre-print
16:50
10m
Talk
iCodeReviewer: Improving Secure Code Review with Mixture of Prompts
Industry Showcase
Yun Peng The Chinese University of Hong Kong, Kisub Kim DGIST, Linghan Meng Huawei, Kui Liu Huawei