Exploring Parameter-Efficient Fine-Tuning of Large Language Model on Automated Program Repair
Automated Program Repair (APR) aims to fix bugs by generating patches. Existing work has demonstrated that the "pre-training and fine-tuning" paradigm enables Large Language Models (LLMs) to improve their fixing capabilities on APR. However, existing work mainly focuses on Full-Model Fine-Tuning (FMFT) for APR, and limited research has been conducted on the execution-based evaluation of Parameter-Efficient Fine-Tuning (PEFT) for APR. Compared to FMFT, PEFT can reduce computing resource consumption without compromising performance and has been widely adopted in other software engineering tasks.
To fill this gap, we first enhance an existing APR dataset by employing prompt engineering to create an instruction dataset, APR-INSTRUCTION. Second, we fine-tune four pre-trained LLMs using four different PEFT methods with APR-INSTRUCTION. The best fine-tuned model fixes 58% more bugs than the state-of-the-art LLM-based APR techniques. The results also show that $(IA)^3$ improves the creativity of LLMs more effectively through fine-tuning and achieves the highest fixing capability compared to the other three PEFT methods. Third, we explore the optimal configuration of PEFT hyperparameters and assess the impact of instruction dataset size, showing that a larger number of parameters and a larger training dataset do not necessarily result in better performance for PEFT. Finally, we analyze peak memory usage and trainable parameters to show the efficiency of PEFT.
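To illustrate why $(IA)^3$ is parameter-efficient, the following is a minimal, dependency-free sketch of its core idea: the pre-trained projection weights stay frozen, and only a small learned vector that elementwise rescales activations is trained. All names and shapes here are illustrative assumptions, not the paper's actual code or models.

```python
# Minimal pure-Python sketch of the (IA)^3 idea (illustrative, not the paper's code):
# instead of updating a layer's full weight matrix, learn a small rescaling vector
# applied elementwise to the layer's output activations.

def linear(x, W):
    # y_j = sum_i x_i * W[i][j]  -- a frozen pre-trained projection
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]

def ia3_forward(x, W, l):
    # (IA)^3: rescale each output dimension by a learned vector l (the only
    # trainable parameters); W remains frozen throughout fine-tuning.
    y = linear(x, W)
    return [l[j] * y[j] for j in range(len(y))]

# Toy frozen 4x3 weight matrix (12 parameters) vs. a 3-element (IA)^3 vector.
W = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6],
     [0.7, 0.8, 0.9],
     [1.0, 1.1, 1.2]]
l = [1.0, 1.0, 1.0]  # initialised to ones, so fine-tuning starts from the base model

x = [1.0, 2.0, 3.0, 4.0]
print(ia3_forward(x, W, l))           # identical to the frozen layer at initialisation

full_ft_params = len(W) * len(W[0])   # 12 trainable parameters under FMFT
peft_params = len(l)                  # 3 trainable parameters under (IA)^3
print(full_ft_params, peft_params)
```

In a real LLM the same ratio holds at scale: the rescaling vectors amount to a tiny fraction of the model's parameters, which is why peak memory usage and trainable-parameter counts drop sharply compared to FMFT.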
This work provides a comprehensive exploration of PEFT for APR and suggests promising directions for extending it to other software engineering downstream tasks. APR-INSTRUCTION, the PEFT weights, and the fine-tuning code are publicly available as open-source resources.
Wed 30 Oct (displayed time zone: Pacific Time, US & Canada)
15:30 - 16:30

15:30 (15m Talk) Repairing Regex-Dependent String Functions. Research Papers.

15:45 (15m Talk) FastFixer: An Efficient and Effective Approach for Repairing Programming Assignments. Research Papers. Fang Liu (Beihang University), Zhenwei Liu (Beihang University), Qianhui Zhao (Beihang University), Jing Jiang (Beihang University), Li Zhang (Beihang University), Zian Sun (Beihang University), Ge Li (Peking University), Zhongqi Li (Huawei Cloud Computing Technologies Co., Ltd.), Yuchi Ma (Huawei Cloud Computing Technologies)

16:00 (15m Talk) Exploring Parameter-Efficient Fine-Tuning of Large Language Model on Automated Program Repair. Research Papers. Guochang Li (Zhejiang University), Chen Zhi (Zhejiang University), Jialiang Chen (Zhejiang University), Junxiao Han, Shuiguang Deng (Zhejiang University; Alibaba-Zhejiang University Joint Institute of Frontier Technologies)

16:15 (15m Talk) Enhancing Automated Program Repair with Solution Design. Research Papers. Jiuang Zhao (Beihang University), Donghao Yang (Beihang University), Li Zhang (Beihang University), Xiaoli Lian (Beihang University, China), Zitian Yang (Beihang University), Fang Liu (Beihang University)