Enhancing LLM-Based Coding Tools through Native Integration of IDE-Derived Static Context
Large Language Models (LLMs) have achieved remarkable success in code completion, as evidenced by their essential role in code assistant services such as Copilot. Trained largely on in-file contexts, current LLMs are quite effective at completing code within single source files. However, they struggle with repository-level code completion for large software projects, which requires cross-file information. Existing research on LLM-based repository-level code completion identifies and integrates cross-file contexts, but it suffers from low accuracy and the limited context length of LLMs. In this paper, we find that Integrated Development Environments (IDEs) can provide direct, accurate, and real-time cross-file information for repository-level code completion. We propose IDECoder, a practical framework that leverages IDE-native static contexts for cross-file context construction and diagnosis results for self-refinement. IDECoder exploits the rich cross-file context information available in IDEs to enhance the capabilities of LLMs for repository-level code completion. We conducted preliminary experiments to validate the performance of IDECoder and observed that this synergy represents a promising direction for future exploration.
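To make the workflow described in the abstract concrete, the following Python sketch illustrates one plausible shape of the IDECoder idea: gather cross-file static context from the IDE, prompt an LLM, and then use IDE diagnostics as feedback for self-refinement. This is a minimal sketch under stated assumptions, not the paper's actual implementation; all names (`IDEBridge`, `query_llm`, `complete_with_ide_context`, and their signatures) are hypothetical stand-ins.

```python
# Hypothetical sketch of an IDECoder-style loop, assuming a generic IDE/LSP
# bridge and a generic LLM completion endpoint. Not the paper's real API.

from dataclasses import dataclass
from typing import List


@dataclass
class Diagnostic:
    message: str  # e.g. "undefined name 'parse_config'" as reported by the IDE
    line: int     # location flagged by the IDE's static analyzer


class IDEBridge:
    """Stand-in for an IDE / language-server connection (assumed interface)."""

    def cross_file_context(self, file_path: str, cursor_line: int) -> List[str]:
        """Return signatures/definitions of symbols referenced near the cursor
        but defined in other files of the repository."""
        raise NotImplementedError

    def diagnostics_for(self, file_path: str, candidate_code: str) -> List[Diagnostic]:
        """Statically check the file with the candidate completion inserted."""
        raise NotImplementedError


def query_llm(prompt: str) -> str:
    """Stand-in for a call to any code-completion LLM."""
    raise NotImplementedError


def complete_with_ide_context(ide: IDEBridge, file_path: str, cursor_line: int,
                              in_file_code: str, max_rounds: int = 3) -> str:
    # Build the prompt from in-file code plus IDE-derived cross-file context.
    context = "\n".join(ide.cross_file_context(file_path, cursor_line))
    prompt = f"# Cross-file context:\n{context}\n\n# Complete the code:\n{in_file_code}"
    candidate = query_llm(prompt)

    # Self-refinement: feed IDE diagnostics back to the LLM until the
    # completion is clean or the round budget is exhausted.
    for _ in range(max_rounds):
        issues = ide.diagnostics_for(file_path, candidate)
        if not issues:
            break
        feedback = "\n".join(f"line {d.line}: {d.message}" for d in issues)
        prompt = (f"{prompt}\n\n# Previous attempt:\n{candidate}\n"
                  f"# Fix these diagnostics:\n{feedback}")
        candidate = query_llm(prompt)
    return candidate
```

The key design choice this sketch highlights is that the IDE, rather than a retrieval heuristic, supplies both the cross-file context before generation and the correctness signal (diagnostics) after generation.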
Sat 20 Apr (displayed time zone: Lisbon)
14:00 - 15:30 | Session 3: Keynote 2 + Position Papers (LLM4Code) at Luis de Freitas Branco | Chair(s): Lingming Zhang (University of Illinois at Urbana-Champaign)
14:00 50m Keynote | Open development of Large Language Models for code with BigCode and StarCoder2 | Loubna Ben Allal (Hugging Face)
14:50 8m Talk | Benchmarking the Security Aspect of Large Language Model-Based Code Generation | Pre-print
14:58 8m Talk | Enhancing LLM-Based Coding Tools through Native Integration of IDE-Derived Static Context | Yichen Li, Yun Peng, Yintong Huo, Michael Lyu (The Chinese University of Hong Kong) | Pre-print
15:06 8m Talk | Evaluating Fault Localization and Program Repair Capabilities of Existing Closed-Source General-Purpose LLMs | Shengbei Jiang, Jiabao Zhang, Wei Chen, Bo Wang (Beijing Jiaotong University), Jianyi Zhou (Huawei Cloud Computing Technologies Co., Ltd.), Jie M. Zhang (King's College London) | Pre-print
15:14 8m Talk | MoonBit: Explore the Design of an AI-Friendly Programming Language | Haoxiang Fei, Yu Zhang, Hongbo Zhang (International Digital Economy Academy), Yanlin Wang (Sun Yat-sen University), Qing Liu (International Digital Economy Academy) | Pre-print
15:22 8m Talk | Toward a New Era of Rapid Development: Assessing GPT-4-Vision's Capabilities in UML-Based Code Generation | Gabor Antal (University of Szeged), Richárd Vozár (Department of Software Engineering, University of Szeged, Hungary), Rudolf Ferenc (University of Szeged)