The Impact of Fine-tuning Large Language Models on Automated Program Repair
Automated Program Repair (APR) uses a variety of tools and techniques to help developers produce functional, error-free code faster. In recent years, Large Language Models (LLMs) have gained popularity as components in APR toolchains because of their performance and flexibility. However, training such models from scratch requires substantial computational resources. Fine-tuning techniques have been developed to adapt pre-trained LLMs to specific tasks, such as APR, and enhance their performance at a far lower computational cost than training from scratch.
In this study, we empirically investigate the impact of various fine-tuning techniques on the performance of LLMs used for APR. Our experiments provide insights into the performance of a selection of state-of-the-art LLMs pre-trained on code. The evaluation covers three popular APR benchmarks (QuixBugs, Defects4J, and HumanEval-Java) and six LLMs with varying parameter sizes (CodeGen, CodeT5, StarCoder, DeepSeekCoder, Bloom, and CodeLlama-2). We consider three training regimes: no fine-tuning, full fine-tuning, and parameter-efficient fine-tuning (PEFT) using LoRA and IA3. We observe that full fine-tuning decreases the benchmark performance of several models, which we attribute to mismatched data distributions and overfitting. Parameter-efficient fine-tuning methods, by contrast, restrict the number of trainable parameters and achieve better results.
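To make the parameter-efficient setup concrete, the sketch below shows how LoRA-style fine-tuning restricts the number of trainable parameters. It uses the Hugging Face `peft` library; the model name, target module names, and hyperparameters are illustrative assumptions, not the exact configuration from the paper.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA via the
# Hugging Face `peft` library. Model name and hyperparameters are assumed
# for illustration and do not reproduce the paper's setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "Salesforce/codegen-350M-mono"  # assumed; any causal code LLM works
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the pre-trained weights and injects small trainable
# low-rank matrices into selected layers, here the attention projection.
config = LoraConfig(
    r=8,                          # rank of the low-rank update (assumed)
    lora_alpha=16,                # scaling factor (assumed)
    target_modules=["qkv_proj"],  # module names differ per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)

# Only a small fraction of the parameters remains trainable, which is
# what limits overfitting relative to full fine-tuning.
peft_model.print_trainable_parameters()
```

IA3 follows the same pattern with `IA3Config` in place of `LoraConfig`, learning vectors that rescale inner activations instead of adding low-rank weight updates.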
Wed 10 Sep (displayed time zone: Auckland, Wellington)
15:30 - 17:00 | Session 5 - Debugging (Research Papers Track / Industry Track) at Case Room 3 260-055. Chair(s): Chanchal K. Roy (University of Saskatchewan)
15:30 (15m) | The Impact of Fine-tuning Large Language Models on Automated Program Repair (Research Papers Track). Roman Machacek (University of Bern), Anastasiia Grishina (Simula Research Laboratory), Max Hort (Simula Research Laboratory), Leon Moonen (Simula Research Laboratory). Pre-print; media attached.
15:45 (15m) | Bridging Solidity Evolution Gaps: An LLM-Enhanced Approach for Smart Contract Compilation Error Resolution (Research Papers Track). Likai Ye (Zhejiang University), Mengliang Li (Zhejiang University), Dehai Zhao (CSIRO's Data61), Jiamou Sun (CSIRO's Data61), Xiaoxue Ren (Zhejiang University). Pre-print.
16:00 (15m) | Code Property Graph Meets Typestate: A Scalable Framework to Behavioral Bug Detection (Research Papers Track). Xingjing Deng (Beihang University), Zhengyao Liu (Beihang University), Zhong Xitong (Beihang University), Shuo Hong (Beihang University), Yixin Yang, Xiang Gao (Beihang University), Yan Xuhui (Huawei), Hailong Sun (Beihang University).
16:15 (15m) | Syntest-ACR: Automated Crash Reproduction for JavaScript (Research Papers Track). Philip Oliver (Victoria University of Wellington), Jens Dietrich (Victoria University of Wellington), Craig Anslow (Victoria University of Wellington), Michael Homer (Victoria University of Wellington). File attached.
16:30 (15m) | TSGuard: Detecting Logic Bugs in Time Series Management Systems via Time Series Algebra (Research Papers Track). Lingwei Kuang, Liang Liu, Wenjing Wang, Ning Cao, Shijie Li, Fan Liu, Haolong Chen (all Nanjing University of Aeronautics and Astronautics).
16:45 (15m) | HybridRCA: Lightweight Critical-Path-Aware Hybrid Tracing for Root-Cause Analysis in Production Microservices (Industry Track). Maryam Ekhlasi (Ciena), Arnaud Fiorini (Polytechnique Montreal), Naser Ezzati Jivan, Michel Dagenais (Polytechnique Montreal), Maxime Lamothe (Polytechnique Montreal).