Commit Message Generation via ChatGPT: How Far Are We? (New Idea Paper)
Commit messages concisely describe code changes in natural language and are important for software maintenance. Various automatic commit message generation approaches have been proposed, including retrieval-based, learning-based, and hybrid approaches. Recently, large language models have shown impressive performance on many natural language processing tasks. Among them, ChatGPT is the most popular and has attracted wide attention from the software engineering community. ChatGPT supports in-context learning (ICL), which allows it to perform downstream tasks by learning from just a few demonstrations without explicit model tuning. However, it remains unclear how well ChatGPT performs on commit message generation via ICL. In this paper, we therefore conduct a preliminary evaluation of ChatGPT with ICL on commit message generation. Specifically, we first explore the impact of two key settings on ICL performance for commit message generation. Then, based on the best settings, we compare ChatGPT with several state-of-the-art approaches. The results show that a carefully designed demonstration can lead to substantial improvements for ChatGPT on commit message generation. Furthermore, ChatGPT outperforms all the retrieval-based and learning-based approaches in terms of BLEU, METEOR, ROUGE-L, and CIDEr, and is comparable to hybrid approaches. Based on our findings, we outline several open challenges and opportunities for ChatGPT-based commit message generation.
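The abstract does not spell out the prompt template, demonstration-selection strategy, or model version used in the study, so the sketch below is only an illustration of what few-shot ICL for commit message generation can look like with the OpenAI chat API (openai>=1.0). The demonstrations, system prompt, helper names (build_prompt, generate_commit_message), and model choice are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of few-shot in-context learning (ICL) for commit message
# generation with the OpenAI chat API. Prompt template, demonstrations, and
# model choice are illustrative assumptions, not the paper's configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical demonstrations: (code diff, reference commit message) pairs.
DEMONSTRATIONS = [
    (
        "--- a/utils.py\n+++ b/utils.py\n@@ -10,7 +10,7 @@\n"
        "-    return value == None\n+    return value is None",
        "Use 'is None' instead of equality comparison in utils",
    ),
    (
        "--- a/server.js\n+++ b/server.js\n@@ -3,6 +3,7 @@\n"
        "+const helmet = require('helmet');\n"
        "@@ -12,6 +13,7 @@\n+app.use(helmet());",
        "Add helmet middleware to harden HTTP headers",
    ),
]


def build_prompt(diff: str, demos=DEMONSTRATIONS) -> str:
    """Concatenate the demonstrations followed by the query diff."""
    parts = []
    for demo_diff, demo_msg in demos:
        parts.append(f"Code diff:\n{demo_diff}\nCommit message: {demo_msg}\n")
    parts.append(f"Code diff:\n{diff}\nCommit message:")
    return "\n".join(parts)


def generate_commit_message(diff: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the chat model to complete the commit message for the query diff."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce randomness for repeatable evaluation
        messages=[
            {"role": "system",
             "content": "You write concise, one-line commit messages for code diffs."},
            {"role": "user", "content": build_prompt(diff)},
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    query_diff = (
        "--- a/parser.py\n+++ b/parser.py\n@@ -25,7 +25,10 @@\n"
        "-    data = json.loads(raw)\n"
        "+    try:\n+        data = json.loads(raw)\n"
        "+    except json.JSONDecodeError:\n+        return None"
    )
    print(generate_commit_message(query_diff))
```

Generated messages would then be compared against the reference commit messages with metrics such as BLEU, METEOR, ROUGE-L, and CIDEr, as described in the abstract.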
Sun 14 Apr (displayed time zone: Lisbon)
16:00 - 17:30 | FORGE 2024 Awards & Foundation Models for Code and Documentation Generation (Research Track, room: Luis de Freitas Branco). Chair(s): Antonio Mastropaolo (Università della Svizzera italiana)

16:00, 10m, Awards: Award Ceremony (Research Track)

16:10, 7m, Short paper: Fine Tuning Large Language Model for Secure Code Generation (New Idea Paper, Research Track). Junjie Li (Concordia University), Aseem Sangalay (Delhi Technological University), Cheng Cheng (Concordia University), Yuan Tian (Queen's University, Kingston, Ontario), Jinqiu Yang (Concordia University)

16:17, 14m, Full paper: Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study (Full Paper, Research Track). Tim van Dam, Frank van der Heijden, Philippe de Bekker, Berend Nieuwschepen, Marc Otten, and Maliheh Izadi (all Delft University of Technology)

16:31, 7m, Short paper: On Evaluating the Efficiency of Source Code Generated by LLMs (New Idea Paper, Research Track). Changan Niu (Software Institute, Nanjing University), Ting Zhang (Singapore Management University), Chuanyi Li (Nanjing University), Bin Luo (Nanjing University), Vincent Ng (Human Language Technology Research Institute, University of Texas at Dallas)

16:38, 14m, Full paper: PathOCL: Path-Based Prompt Augmentation for OCL Generation with GPT-4 (Full Paper, Research Track). Seif Abukhalaf (Polytechnique Montréal), Mohammad Hamdaqa (Polytechnique Montréal), Foutse Khomh (École Polytechnique de Montréal)

16:52, 7m, Short paper: Creative and Correct: Requesting Diverse Code Solutions from AI Foundation Models (New Idea Paper, Research Track). Scott Blyth (Monash University), Christoph Treude (Singapore Management University), Markus Wagner (Monash University, Australia)

16:59, 7m, Short paper: Commit Message Generation via ChatGPT: How Far Are We? (New Idea Paper, Research Track)

17:06, 24m, Other: Discussion (Research Track)