SANER 2025
Tue 4 - Fri 7 March 2025 Montréal, Québec, Canada
Fri 7 Mar 2025 11:15 - 11:30 at M-1410 - Code Quality and Refactoring Chair(s): Wesley Assunção

The use of large language models (LLMs) to solve coding problems has gained significant attention recently. However, the effectiveness of LLMs in addressing code maintainability issues is still not fully understood. This study evaluates how well LLMs can address maintainability problems in real projects. We use SonarQube to detect 127 maintainability issues across 10 GitHub repositories, corresponding to 10 SonarQube rules. We then ask state-of-the-art LLMs, Copilot Chat and Llama 3.1, to identify and fix these maintainability issues, generating 381 LLM-produced solutions. The solutions are evaluated against three criteria: compilation errors, test failures, and newly introduced maintainability issues. We assess a zero-shot prompting approach with Copilot Chat and Llama 3.1, and a few-shot prompting approach with Llama 3.1 only. We also conduct a user study to evaluate the readability of 54 LLM-generated solutions. Our findings show that, out of the 127 maintainability issues, the Llama few-shot approach had the highest success rate, fixing 57 (44.8%) methods, followed by Copilot Chat zero-shot with 41 (32.2%) and Llama zero-shot with 38 (29.9%), in each case without introducing new errors. However, most of the 381 generated solutions resulted in compilation errors, test failures, or newly introduced maintainability issues. In our user study, developers frequently noted improved source code readability after the LLMs fixed the maintainability problems. Overall, our work demonstrates that while LLMs can successfully address some issues, their effectiveness in real-world projects is still limited. The risk of introducing new problems, such as compilation errors, test failures, and code degradation, underscores the need for developer oversight.
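The abstract contrasts zero-shot and few-shot prompting but does not reproduce the prompts used in the study. As a rough, hypothetical illustration of the difference, the sketch below assembles both prompt styles around a SonarQube rule and a target method; the function names, prompt wording, and example rule are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch only: the paper's actual prompt wording, rule selection, and
# evaluation harness are not shown on this page. Function names, the example rule,
# and the prompt text below are assumptions made for illustration.

def build_zero_shot_prompt(rule_description: str, method_source: str) -> str:
    """Zero-shot: state the SonarQube rule and ask for a fix, with no examples."""
    return (
        "The following Java method violates a SonarQube maintainability rule:\n"
        f"{rule_description}\n\n"
        "Rewrite the method so the issue is resolved while preserving its behavior.\n\n"
        f"{method_source}\n\n"
        "Return only the corrected method."
    )


def build_few_shot_prompt(rule_description: str,
                          examples: list[tuple[str, str]],
                          method_source: str) -> str:
    """Few-shot: prepend (violating, fixed) example pairs before the target method."""
    shots = "\n\n".join(
        f"Example {i + 1} (before):\n{before}\n\nExample {i + 1} (after):\n{after}"
        for i, (before, after) in enumerate(examples)
    )
    return (
        f"SonarQube rule: {rule_description}\n\n"
        f"{shots}\n\n"
        "Now fix the following method in the same way, preserving its behavior:\n\n"
        f"{method_source}\n\n"
        "Return only the corrected method."
    )


if __name__ == "__main__":
    # java:S3776 ("Cognitive Complexity of methods should not be too high") is just
    # one plausible maintainability rule; the paper's 10 rules are not listed here.
    rule = 'java:S3776 - "Cognitive Complexity of methods should not be too high"'
    target = "public void process(Order o) { /* deeply nested conditionals ... */ }"
    print(build_zero_shot_prompt(rule, target))
```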

Fri 7 Mar

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Code Quality and Refactoring
Research Papers / Reproducibility Studies and Negative Results (RENE) Track at M-1410
Chair(s): Wesley Assunção North Carolina State University
11:00
15m
Talk
Evaluating Software Development Agents: Patch Patterns, Code Quality, and Issue Complexity in Real-World GitHub Scenarios
Research Papers
Zhi Chen Singapore Management University, Lingxiao Jiang Singapore Management University
Pre-print
11:15
15m
Talk
Evaluating the Effectiveness of LLMs in Fixing Maintainability Issues in Real-World Projects
Research Papers
Henrique Gomes Nunes Federal University of Minas Gerais, Eduardo Figueiredo Federal University of Minas Gerais, Larissa Rocha State University of Bahia, Sarah Nadi New York University Abu Dhabi, Fischer Ferreira Federal University of Ceará, Geanderson Esteves dos Santos Federal University of Minas Gerais
11:30
15m
Talk
Exploring the Potential of Llama Models in Automated Code Refinement: A Replication Study
Research Papers
Genevieve Caumartin Concordia University, Qiaolin Qin Polytechnique Montréal, Heng Li Polytechnique Montréal, Diego Elias Costa Concordia University, Canada
Pre-print
11:45
15m
Talk
Exploring the Relationship between Technical Debt and Lead Time: An Industrial Case Study
Reproducibility Studies and Negative Results (RENE) Track
Bhuwan Paudel Blekinge Institute of Technology, Javier Gonzalez-Huerta Blekinge Institute of Technology, Ehsan Zabardast Nordea, Blekinge Institute of Technology, Eriks Klotins Blekinge Institute of Technology