Large language models such as Codex have shown the capability to produce code for many programming tasks. However, the success rate of existing models remains low, especially for complex programming tasks. One reason is that language models lack awareness of program semantics, resulting in incorrect programs, or even programs that do not compile. In this paper, we systematically study whether automated program repair (APR) techniques can fix the incorrect solutions produced by language models in LeetCode contests. The goal is to study whether APR techniques can enhance the reliability of the code produced by large language models. Our study revealed that: (1) automatically generated code shares common programming mistakes with human-crafted solutions, indicating that APR techniques may have potential to fix auto-generated code; (2) given bug location information produced by a statistical fault localization approach, the newly released Codex edit mode, which supports editing code, performs similarly to or better than the existing Java repair tools TBar and Recoder in fixing incorrect solutions. By analyzing the experimental results generated by these tools, we offer several suggestions: (1) enhancing APR tools to overcome the limitations of their patch space (e.g., by introducing more flexible fault localization) is desirable; (2) since large language models can derive more fix patterns by training on more data, future APR tools could shift focus from adding more fix patterns to synthesis- and semantics-based approaches; (3) the combination of language models with APR to curate patch ingredients is worth studying.
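The abstract mentions feeding repair tools bug locations from a statistical fault localization approach. As a rough illustration of how such spectrum-based fault localization works, the sketch below ranks statements by the Ochiai suspiciousness formula, one common choice; the paper does not specify which formula its approach uses, and the coverage data here is invented for illustration.

```python
# Illustrative sketch of spectrum-based fault localization (SBFL).
# Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep)),
# where ef/ep = how many failing/passing tests cover the statement.
# The formula choice and the sample coverage data are assumptions,
# not taken from the paper under discussion.
import math

def ochiai(ef: int, ep: int, total_failed: int) -> float:
    """Suspiciousness of one statement from its test coverage counts."""
    denom = math.sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

# coverage[stmt] = (covered by N failing tests, covered by M passing tests)
coverage = {
    "stmt A": (2, 5),  # covered by both kinds of tests
    "stmt B": (2, 0),  # covered only by failing tests -> most suspicious
    "stmt C": (0, 4),  # covered only by passing tests -> least suspicious
}
total_failed = 2

# Rank statements from most to least suspicious; a repair tool would
# then attempt patches at the top-ranked locations first.
ranking = sorted(coverage, key=lambda s: -ochiai(*coverage[s], total_failed))
```

Here `ranking` puts "stmt B" first (score 1.0), since it is executed by every failing test and no passing test; handing such a ranked list to a patch generator is the division of labor the abstract describes.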
Thu 18 May (times shown in the Hobart time zone)
13:45 - 15:15 | Program repair with and for AI (Technical Track / Journal-First Papers / DEMO - Demonstrations) at Meeting Room 102. Chair(s): Julia Rubin (University of British Columbia, Canada)
13:45 | 15m Talk | Impact of Code Language Models on Automated Program Repair (Technical Track). Nan Jiang (Purdue University), Kevin Liu (Lynbrook High School), Thibaud Lutellier (University of Alberta), Lin Tan (Purdue University). Pre-print available.
14:00 | 15m Talk | Tare: Type-Aware Neural Program Repair (Technical Track). Qihao Zhu (Peking University), Zeyu Sun (Zhongguancun Laboratory), Wenjie Zhang (Peking University), Yingfei Xiong (Peking University), Lu Zhang (Peking University)
14:15 | 15m Talk | Template-based Neural Program Repair (Technical Track). Xiangxin Meng (Beihang University, Beijing, China), Xu Wang (Beihang University), Hongyu Zhang (The University of Newcastle), Hailong Sun (School of Computer Science and Engineering, Beihang University, Beijing, China), Xudong Liu (Beihang University), Chunming Hu (Beihang University). Pre-print available.
14:30 | 15m Talk | Automated Repair of Programs from Large Language Models (Technical Track). Zhiyu Fan (National University of Singapore, Singapore), Xiang Gao (Beihang University, China), Martin Mirchev (National University of Singapore), Abhik Roychoudhury (National University of Singapore), Shin Hwei Tan (Southern University of Science and Technology)
14:45 | 15m Talk | Automated Program Repair in the Era of Large Pre-trained Language Models (Technical Track). Chunqiu Steven Xia (University of Illinois at Urbana-Champaign), Yuxiang Wei (University of Illinois at Urbana-Champaign), Lingming Zhang (University of Illinois at Urbana-Champaign)
15:00 | 7m Talk | AIREPAIR: A Repair Platform for Neural Networks (DEMO - Demonstrations). Xidan Song (Department of Computer Science, University of Manchester, UK), Youcheng Sun (The University of Manchester), Mustafa A. Mustafa (Department of Computer Science, University of Manchester, UK; imec-COSIC, KU Leuven, Belgium), Lucas C. Cordeiro (University of Manchester)
15:07 | 7m Talk | Arachne: Search Based Repair of Deep Neural Networks (Journal-First Papers). Publication DOI and pre-print available.