Instructive Code Retriever: Learn from Large Language Model's Feedback for Code Intelligence Tasks
Recent studies have proposed leveraging large language models (LLMs) with In-Context Learning (ICL) to handle code intelligence tasks without fine-tuning. ICL uses task instructions and a set of examples as demonstrations to guide the model toward accurate answers without updating its parameters. While ICL has proven effective for code intelligence tasks, its performance heavily depends on the selected examples. Previous work has had some success using BM25 to retrieve examples for code intelligence tasks. However, existing approaches cannot capture the semantic and structural information of queries, resulting in less helpful demonstrations. Moreover, they do not adapt well to the complex and dynamic nature of user queries across diverse domains. In this paper, we introduce a novel approach named Instructive Code Retriever (ICR), which is designed to retrieve examples that enhance model inference across various code intelligence tasks and datasets. We enable ICR to learn the semantic and structural information of the corpus via a tree-based loss function. To better capture the correlation between queries and examples, we incorporate feedback from LLMs to guide the training of the retriever. Experimental results demonstrate that our retriever significantly outperforms state-of-the-art approaches. We evaluate its effectiveness on three tasks: code summarization, program synthesis, and bug fixing. Compared to previous state-of-the-art algorithms, our method achieves BLEU-4 improvements of 50.0% and 90.0% on two code summarization datasets, a 76.2% improvement in CodeBLEU on a program synthesis dataset, and gains of 3.6 and 3.2 BLEU-4 points on two bug fixing datasets.
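For readers unfamiliar with retrieval-augmented ICL, the sketch below illustrates the BM25 baseline setup the abstract refers to: retrieving lexically similar examples from a corpus and prepending them to the query as demonstrations. This is a minimal illustration under stated assumptions, not the paper's implementation; the toy corpus and the helpers `retrieve_demonstrations` and `build_prompt` are hypothetical names introduced for exposition, and only the `rank_bm25` package API is real.

```python
# A minimal sketch of retrieval-augmented ICL with a BM25 retriever.
# The toy corpus and helper names are illustrative assumptions, not
# the paper's implementation; only the rank_bm25 API is real.
from rank_bm25 import BM25Okapi

# Hypothetical retrieval corpus of (code, summary) pairs for code summarization.
corpus = [
    ("def add(a, b):\n    return a + b",
     "Return the sum of two numbers."),
    ("def read_file(path):\n    return open(path).read()",
     "Read a file and return its contents."),
    ("def is_even(n):\n    return n % 2 == 0",
     "Check whether a number is even."),
]

# BM25 scores lexical overlap between token sequences.
bm25 = BM25Okapi([code.split() for code, _ in corpus])

def retrieve_demonstrations(query_code, k=2):
    """Return the k corpus examples most lexically similar to the query."""
    scores = bm25.get_scores(query_code.split())
    top = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:k]
    return [corpus[i] for i in top]

def build_prompt(query_code):
    """Prepend retrieved (code, summary) demonstrations to the query."""
    parts = [f"Code:\n{code}\nSummary: {summary}\n"
             for code, summary in retrieve_demonstrations(query_code)]
    parts.append(f"Code:\n{query_code}\nSummary:")
    return "\n".join(parts)

# The assembled prompt would then be sent to an LLM without fine-tuning.
print(build_prompt("def sub(a, b):\n    return a - b"))
```

Per the abstract, ICR keeps this overall prompt-assembly pipeline but replaces the purely lexical BM25 scorer with a trained retriever: a tree-based loss exposes the corpus's semantic and structural information, and LLM feedback supervises which candidate examples actually help the model answer.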
Wed 30 Oct (displayed time zone: Pacific Time, US & Canada)
13:30 - 15:00 | LLM for SE 2 (NIER Track / Research Papers / Industry Showcase / Tool Demonstrations) at Camellia. Chair(s): Wenxi Wang, University of Virginia
13:30 | 15m Talk | A Systematic Evaluation of Large Code Models in API Suggestion: When, Which, and How (Research Papers). Chaozheng Wang, The Chinese University of Hong Kong; Shuzheng Gao, Chinese University of Hong Kong; Cuiyun Gao, Harbin Institute of Technology; Wenxuan Wang, Chinese University of Hong Kong; Chun Yong Chong, Huawei; Shan Gao, Huawei; Michael Lyu, The Chinese University of Hong Kong
13:45 | 15m Talk | AutoDW: Automatic Data Wrangling Leveraging Large Language Models (Industry Showcase). Lei Liu, Fujitsu Laboratories of America, Inc.; So Hasegawa, Fujitsu Research of America Inc.; Shailaja Keyur Sampat, Fujitsu Research of America Inc.; Maria Xenochristou, Fujitsu Research of America Inc.; Wei-Peng Chen, Fujitsu Research of America, Inc.; Takashi Kato, Fujitsu Research; Taisei Kakibuchi, Fujitsu Research; Tatsuya Asai, Fujitsu Research
14:00 | 15m Talk | Instructive Code Retriever: Learn from Large Language Model's Feedback for Code Intelligence Tasks (Research Papers). Jiawei Lu, Zhejiang University; Haoye Wang, Hangzhou City University; Zhongxin Liu, Zhejiang University; Keyu Liang, Zhejiang University; Lingfeng Bao, Zhejiang University; Xiaohu Yang, Zhejiang University
14:15 | 15m Talk | WaDec: Decompile WebAssembly Using Large Language Model (Research Papers). Xinyu She, Huazhong University of Science and Technology; Yanjie Zhao, Huazhong University of Science and Technology; Haoyu Wang, Huazhong University of Science and Technology
14:30 | 10m Talk | LLM4Workflow: An LLM-based Automated Workflow Model Generation Tool (Tool Demonstrations)
14:40 | 10m Talk | GPTZoo: A Large-scale Dataset of GPTs for the Research Community (NIER Track). Xinyi Hou, Huazhong University of Science and Technology; Yanjie Zhao, Huazhong University of Science and Technology; Shenao Wang, Huazhong University of Science and Technology; Haoyu Wang, Huazhong University of Science and Technology
14:50 | 10m Talk | Emergence of A Novel Domain Expert: A Generative AI-based Framework for Software Function Point Analysis (NIER Track)