Evaluating the Effectiveness of LLMs in Fixing Maintainability Issues in Real-World Projects
The use of large language models (LLMs) to solve coding problems has gained significant attention in recent years. However, the effectiveness of LLMs in addressing code maintainability issues is still not fully understood. This study evaluates how well LLMs can address maintainability issues in real-world projects. We use SonarQube to detect 127 maintainability issues, corresponding to 10 SonarQube rules, across 10 GitHub repositories. We then ask two state-of-the-art LLMs, Copilot Chat and Llama 3.1, to identify and fix these issues, yielding 381 LLM-produced solutions. Each solution is evaluated against three criteria: compilation errors, test failures, and newly introduced maintainability issues. We assess a zero-shot prompting approach with both Copilot Chat and Llama 3.1, and a few-shot prompting approach with Llama 3.1 only. We also conduct a user study to evaluate the readability of 54 LLM-generated solutions. Our findings show that, of the 127 maintainability issues, the Llama few-shot approach achieved the highest success rate, fixing 57 methods (44.8%), followed by Copilot Chat zero-shot with 41 (32.2%) and Llama zero-shot with 38 (29.9%), all without introducing new errors. However, the majority of the generated solutions resulted in compilation errors, test failures, or newly introduced maintainability issues. In the user study, developers frequently noted improved source code readability after the LLMs fixed the maintainability issues. Overall, our work demonstrates that while LLMs can successfully address some maintainability issues, their effectiveness in real-world projects remains limited. The risk of introducing new problems, such as compilation errors, test failures, and code degradation, underscores the need for developer oversight.
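
To make the evaluation pipeline concrete, the sketch below shows one plausible fix-and-check loop of the kind the abstract describes: prompt an LLM to repair a method flagged by SonarQube, apply the fix, and check compilation, tests, and re-analysis. This is an illustrative assumption, not the authors' implementation; it assumes Maven-based Java repositories with the SonarQube scanner configured, and the prompt template, helper names, and ask_llm placeholder are hypothetical.

    import subprocess
    from pathlib import Path

    ZERO_SHOT_PROMPT = (
        "The following Java method violates SonarQube rule {rule}.\n"
        "Return only the corrected method, preserving its behavior.\n\n{method}"
    )

    def ask_llm(prompt: str) -> str:
        """Placeholder: call Copilot Chat, Llama 3.1, or any other model here."""
        raise NotImplementedError

    def step_ok(cmd: list[str], repo: Path) -> bool:
        """Run one check (compile, test, or analysis) and report success."""
        return subprocess.run(cmd, cwd=repo, capture_output=True).returncode == 0

    def evaluate_fix(repo: Path, java_file: Path, method: str, rule: str) -> dict:
        """Apply an LLM fix for one flagged method, then check the three criteria."""
        original_source = java_file.read_text()
        fixed_method = ask_llm(ZERO_SHOT_PROMPT.format(rule=rule, method=method))
        java_file.write_text(original_source.replace(method, fixed_method))
        result = {
            "compiles": step_ok(["mvn", "-q", "compile"], repo),
            "tests_pass": step_ok(["mvn", "-q", "test"], repo),
            # Re-run the analysis; newly introduced issues are found by diffing
            # the issue list reported by the SonarQube server before and after.
            "reanalyzed": step_ok(["mvn", "-q", "sonar:sonar"], repo),
        }
        java_file.write_text(original_source)  # restore the file for the next issue
        return result

Under these assumptions, a few-shot variant would simply prepend one or two example fixes for the same SonarQube rule to ZERO_SHOT_PROMPT before the target method.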