SCAM 2025
Sun 7 - Fri 12 September 2025 Auckland, New Zealand
co-located with ICSME 2025
Mon 8 Sep 2025 13:30 - 13:52 at OGGB5 260-051 - LLMs Chair(s): Jens Dietrich

Code review is a crucial practice in software development. Because modern code review is lightweight, reviewers raise a wide range of issues, some of which are trivial. Research has investigated automated approaches to classify review comments in order to gauge the effectiveness of code reviews. However, previous studies have primarily relied on supervised machine learning, which requires extensive manual annotation to train models effectively. To address this limitation, we explore the potential of using Large Language Models (LLMs) to classify code review comments. We assess the performance of LLMs in classifying 17 categories of code review comments. Our results show that LLMs can classify code review comments, outperforming the state-of-the-art approach based on a trained deep learning model. In particular, LLMs achieve better accuracy in classifying the five most useful categories, which the state-of-the-art approach struggles with due to the scarcity of training examples. Rather than depending on a specific, small training data distribution, LLMs provide balanced performance across high- and low-frequency categories. These results suggest that LLMs could offer a scalable solution for code review analytics and improve the effectiveness of the code review process.
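The zero-shot classification setup the abstract describes could be sketched roughly as below. This is an illustrative assumption, not the authors' actual prompt or taxonomy: the category names, prompt wording, and helper functions are hypothetical, and the paper's full taxonomy has 17 categories.

```python
# Hypothetical sketch of zero-shot review-comment classification with an LLM.
# The categories and prompt wording are illustrative assumptions only.

CATEGORIES = [  # illustrative subset, not the paper's 17-category taxonomy
    "defect", "refactoring", "documentation", "style", "question",
]

def build_prompt(comment: str) -> str:
    """Format a single review comment into a classification prompt."""
    labels = ", ".join(CATEGORIES)
    return (
        f"Classify the following code review comment into exactly one of "
        f"these categories: {labels}.\n"
        f"Comment: {comment}\n"
        f"Answer with the category name only."
    )

def parse_label(model_output: str) -> str:
    """Map a raw model reply onto a known category; fall back to 'question'."""
    reply = model_output.strip().lower()
    for cat in CATEGORIES:
        if cat in reply:
            return cat
    return "question"  # fallback when the reply matches no known category

# A real pipeline would send build_prompt(...) to an LLM API; here we only
# exercise the prompt construction and reply parsing with a mocked reply.
label = parse_label("This looks like a Defect.")
```

In practice, the prompt would be sent to the chosen LLM for each comment, and the parsed label compared against human annotations to compute per-category accuracy.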

Mon 8 Sep

Displayed time zone: Auckland, Wellington

13:30 - 15:00
LLMs (Research Track) at OGGB5 260-051
Chair(s): Jens Dietrich Victoria University of Wellington
13:30
22m
Research paper
Exploring the Potential of Large Language Models in Fine-Grained Review Comment Classification
Research Track
Linh Nguyen The University of Melbourne, Chunhua Liu The University of Melbourne, Hong Yi Lin The University of Melbourne, Patanamon Thongtanunam The University of Melbourne
Pre-print
13:52
22m
Research paper
Language-Agnostic Generation of Header Comments using Large Language Models
Research Track
Nathanael Yao Queen's University, Juergen Dingel Queen's University, Ali Tizghadam TELUS, Ibrahim Amer Queen's University
14:15
22m
Research paper
Smelling Secrets: Leveraging Machine Learning and Language Models for Sensitive Parameter Detection in Ansible Security Analysis
Research Track
Ruben Opdebeeck Vrije Universiteit Brussel, Valeria Pontillo Gran Sasso Science Institute, Camilo Velázquez-Rodríguez Vrije Universiteit Brussel, Wolfgang De Meuter Vrije Universiteit Brussel, Coen De Roover Vrije Universiteit Brussel
Pre-print File Attached
14:37
22m
Research paper
Testing the Untestable? An Empirical Study on the Testing Process of LLM-Powered Software Systems
Research Track
Cleyton V. C. de Magalhaes CESAR School, Italo Santos University of Hawai‘i at Mānoa, Brody Stuart-Verner University of Calgary, Ronnie de Souza Santos University of Calgary
Pre-print