A Deep Dive into Retrieval-Augmented Generation for Code Completion: Experience on WeChat
Code completion, a crucial software engineering task that enhances developer productivity, has seen substantial improvements with the rapid advancement of large language models (LLMs). In recent years, retrieval-augmented generation (RAG) has emerged as a promising method for enhancing the code completion capabilities of LLMs by leveraging relevant context from codebases without requiring model retraining. While existing studies have demonstrated the effectiveness of RAG on public repositories and benchmarks, the potential distribution shift between open-source and closed-source codebases presents unique challenges that remain unexplored. To bridge this gap, we conduct an empirical study of widely used RAG methods for code completion on the industrial-scale codebase of WeChat, one of the largest proprietary software systems. Specifically, we extensively explore two main types of RAG methods, namely identifier-based RAG and similarity-based RAG, across 26 open-source LLMs ranging from 0.5B to 671B parameters. For a more comprehensive analysis, we employ different retrieval techniques for similarity-based RAG, including lexical and semantic retrieval. Based on 1,669 internal repositories, we reach several key findings: (1) both RAG methods are effective in closed-source repositories, with similarity-based RAG showing superior performance; (2) the effectiveness of similarity-based RAG improves with more advanced retrieval techniques, with BM25 (lexical retrieval) and GTE-Qwen (semantic retrieval) performing best; and (3) combining lexical and semantic retrieval yields the best results, demonstrating their complementary strengths. Furthermore, we conduct a developer survey to validate the practical utility of RAG methods in real-world development environments.
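To make the similarity-based RAG setting concrete, the core retrieval step can be sketched as a minimal lexical (BM25) retriever over code snippets: candidate snippets from the codebase are ranked against the unfinished code, and the top-scoring ones are prepended to the LLM prompt. This is an illustrative sketch only, not the paper's implementation; the tiny corpus, the sub-word tokenizer, and the `k1`/`b` parameters below are assumptions.

```python
import math
import re
from collections import Counter

def tokenize(code: str) -> list[str]:
    # Naive sub-word tokenizer: lowercases and splits snake_case identifiers.
    # Real systems would use a language-aware lexer (an assumption here).
    return re.findall(r"[a-z]+", code.lower())

class BM25:
    """Minimal Okapi BM25 scorer over a small code-snippet corpus (illustrative)."""

    def __init__(self, docs: list[str], k1: float = 1.2, b: float = 0.75):
        self.k1, self.b = k1, b
        self.docs = [tokenize(d) for d in docs]
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        self.tf = [Counter(d) for d in self.docs]  # per-document term frequencies
        df = Counter()  # document frequency of each term
        for d in self.docs:
            df.update(set(d))
        # Smoothed IDF; the +1 inside the log keeps all weights positive.
        self.idf = {t: math.log((self.N - n + 0.5) / (n + 0.5) + 1)
                    for t, n in df.items()}

    def score(self, query: str, i: int) -> float:
        dl = len(self.docs[i])
        s = 0.0
        for t in tokenize(query):
            f = self.tf[i].get(t, 0)
            if f == 0 or t not in self.idf:
                continue
            # Okapi BM25 term weight with length normalization.
            s += self.idf[t] * f * (self.k1 + 1) / (
                f + self.k1 * (1 - self.b + self.b * dl / self.avgdl))
        return s

    def top_k(self, query: str, k: int = 2) -> list[int]:
        ranked = sorted(range(self.N), key=lambda i: self.score(query, i),
                        reverse=True)
        return ranked[:k]

# Hypothetical mini-corpus of repository snippets (not from the study).
corpus = [
    "def send_message(user, text): ...",
    "def parse_config(path): ...",
    "def send_file(user, path): ...",
]
bm25 = BM25(corpus)
# Retrieve context for an unfinished completion site.
top_hit = bm25.top_k("send message to user", k=1)  # → [0]
```

A hybrid retriever, as the abstract's third finding suggests, could then combine these lexical scores with cosine similarities from a semantic embedding model (e.g. GTE-Qwen), for instance by normalizing and summing both score lists before ranking.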
Fri 12 Sep (displayed time zone: Auckland, Wellington)
10:30 - 12:00 | Session 13 - Reuse 1 | NIER Track / Research Papers Track / Industry Track / Registered Reports | Case Room 3 260-055 | Chair(s): Banani Roy (University of Saskatchewan)
10:30 (15m) | From Release to Adoption: Challenges in Reusing Pre-trained AI Models for Downstream Developers | Research Papers Track | Peerachai Banyongrakkul (The University of Melbourne), Mansooreh Zahedi (The University of Melbourne), Patanamon Thongtanunam (University of Melbourne), Christoph Treude (Singapore Management University), Haoyu Gao (The University of Melbourne) | Pre-print
10:45 (15m) | Are Classical Clone Detectors Good Enough For the AI Era? | Research Papers Track | Ajmain Inqiad Alam (University of Saskatchewan), Palash Ranjan Roy (University of Saskatchewan), Farouq Al-Omari (Thompson Rivers University), Chanchal K. Roy (University of Saskatchewan), Banani Roy (University of Saskatchewan), Kevin Schneider (University of Saskatchewan)
11:00 (10m) | Can LLMs Write CI? A Study on Automatic Generation of GitHub Actions Configurations | NIER Track | Taher A. Ghaleb (Trent University), Dulina Rathnayake (Department of Computer Science, Trent University, Peterborough, Canada) | Pre-print
11:10 (10m) | A Preliminary Study on Large Language Models Self-Negotiation in Software Engineering | NIER Track | Chunrun Tao (Kyushu University), Honglin Shu (Kyushu University), Masanari Kondo (Kyushu University), Yasutaka Kamei (Kyushu University)
11:20 (10m) | CIgrate: Automating CI Service Migration with Large Language Models | Registered Reports | Md Nazmul Hossain (Department of Computer Science, Trent University, Peterborough, Canada), Taher A. Ghaleb (Trent University) | Pre-print
11:30 (15m) | A Deep Dive into Retrieval-Augmented Generation for Code Completion: Experience on WeChat | Industry Track | Zezhou Yang (Tencent Inc.), Ting Peng (Tencent Inc.), Cuiyun Gao (Harbin Institute of Technology, Shenzhen), Chaozheng Wang (The Chinese University of Hong Kong), Hailiang Huang (Tencent Inc.), Yuetang Deng (Tencent)
11:45 (10m) | Inferring Attributed Grammars from Parser Implementations | NIER Track | Andreas Pointner (University of Applied Sciences Upper Austria, Hagenberg, Austria), Josef Pichler (University of Applied Sciences Upper Austria), Herbert Prähofer (Johannes Kepler University Linz) | Pre-print