Aligning LLMs to Fully Utilize the Cross-file Context in Repository-level Code Completion
This program is tentative and subject to change.
Large Language Models (LLMs) have shown promising results in repository-level code completion, which completes code based on the in-file and cross-file context of a repository. The cross-file context typically contains different types of information (e.g., relevant APIs and similar code) and is lengthy. In this paper, we find that LLMs struggle to fully utilize the information in the cross-file context. We hypothesize that one root cause of this limitation is the misalignment between pre-training (i.e., relying on nearby context) and repo-level code completion (i.e., frequently attending to long-range cross-file context).
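To make the task setup concrete, below is a minimal sketch of how a repo-level completion prompt is commonly assembled: retrieved cross-file snippets are prepended to the in-file code before the cursor. All names here (build_repo_completion_prompt, tokens_per_char, the budget heuristic) are illustrative assumptions, not details from the paper.

def build_repo_completion_prompt(
    cross_file_snippets: list[tuple[str, str]],  # (file path, snippet) pairs
    in_file_prefix: str,                         # code before the cursor
    max_context_tokens: int = 128_000,           # matches the 128K limit above
    tokens_per_char: float = 0.3,                # rough heuristic, not a real tokenizer
) -> str:
    """Concatenate cross-file snippets and the in-file prefix into one prompt."""
    parts = []
    budget = max_context_tokens
    for path, snippet in cross_file_snippets:
        block = f"# file: {path}\n{snippet}\n"
        cost = int(len(block) * tokens_per_char)
        if cost > budget:
            break  # stop once the long-context budget is exhausted
        parts.append(block)
        budget -= cost
    parts.append(in_file_prefix)  # the model completes code right after this
    return "\n".join(parts)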
To address this misalignment, we propose Code Long-context Alignment (CoLA), a purely data-driven approach that explicitly teaches LLMs to focus on the cross-file context. Specifically, CoLA constructs a large-scale repo-level code completion dataset, CoLA-132K, where each sample contains a long cross-file context (up to 128K tokens) and requires generating context-aware code (i.e., cross-file API invocations and code spans similar to the cross-file context). Through a two-stage training pipeline on CoLA-132K, LLMs learn to find relevant information in the cross-file context, thus aligning them with repo-level code completion. We apply CoLA to multiple popular LLMs (e.g., aiXcoder-7B) and conduct extensive experiments on CoLA-132K and a public benchmark, CrossCodeEval. Our experiments yield the following results. (1) Effectiveness. CoLA substantially improves the performance of multiple LLMs in repo-level code completion. For example, it improves aiXcoder-7B by up to 19.7% in exact match. (2) Generalizability. The capability learned with CoLA generalizes to new languages (i.e., languages not in the training data). (3) Enhanced context utilization. We design two probing experiments, which show that CoLA improves the capability of LLMs to utilize the information (i.e., relevant APIs and similar code) in the cross-file context. Our datasets and model weights are released in the replication package.
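For reference, the exact-match metric cited above can be computed as in the sketch below. The whitespace normalization is an assumed simplification; benchmarks such as CrossCodeEval define their own canonicalization.

def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions identical to their references."""
    assert len(predictions) == len(references)

    def norm(code: str) -> str:
        return " ".join(code.split())  # collapse whitespace (assumed normalization)

    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references) if references else 0.0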
Tue 18 Nov (displayed time zone: Seoul)
11:00 - 12:30
11:00 10m Talk | Learning Project-wise Subsequent Code Edits via Interleaving Neural-based Induction and Tool-based Deduction | Research Papers | Chenyan Liu (Shanghai Jiao Tong University; National University of Singapore), Yun Lin (Shanghai Jiao Tong University), Yuhuan Huang (Shanghai Jiao Tong University), Jiaxin Chang (Shanghai Jiao Tong University), Binhang Qi (National University of Singapore), Bo Jiang (Bytedance Network Technology), Zhiyong Huang (National University of Singapore), Jin Song Dong (National University of Singapore)
11:10 10m Talk | Coding-Fuse: Efficient Fusion of Code Pre-Trained Models for Classification Tasks | Research Papers | Yu Zhao, Lina Gong (Nanjing University of Aeronautics and Astronautics), Zhiqiu Huang (Nanjing University of Aeronautics and Astronautics), Yuchen Jin (Nanjing University of Aeronautics and Astronautics), Mingqiang Wei (Nanjing University of Aeronautics and Astronautics)
11:20 10m Talk | SE-Jury: An LLM-as-Ensemble-Judge Metric for Narrowing the Gap with Human Evaluation in SE | Research Papers | Xin Zhou (Singapore Management University, Singapore), Kisub Kim (DGIST), Ting Zhang (Monash University), Martin Weyssow (Singapore Management University), Luis F. Gomes (Carnegie Mellon University), Guang Yang, Kui Liu (Huawei), Xin Xia (Zhejiang University), David Lo (Singapore Management University)
11:30 10m Talk | iKnow: an Intent-Guided Chatbot for Cloud Operations with Retrieval-Augmented Generation | Research Papers | Junjie Huang (The Chinese University of Hong Kong), Yuedong Zhong (Sun Yat-sen University), Guangba Yu (The Chinese University of Hong Kong), Zhihan Jiang (The Chinese University of Hong Kong), Minzhi Yan (HCC Lab, Huawei Cloud Computing Technology Co., Ltd), Wenfei Luan (HCC Lab, Huawei Cloud Computing Technology Co., Ltd), Tianyu Yang (HCC Lab, Huawei Cloud Computing Technology Co., Ltd), Rui Ren (Computing and Networking Innovation Lab, Huawei Cloud Computing Technology Co., Ltd), Michael Lyu (The Chinese University of Hong Kong)
11:40 10m Talk | Aligning LLMs to Fully Utilize the Cross-file Context in Repository-level Code Completion | Research Papers | Jia Li (Tsinghua University), Hao Zhu (Peking University), Huanyu Liu, Xianjie Shi (Peking University), He Zong (aiXcoder), Yihong Dong (Peking University), Kechi Zhang (Peking University, China), Siyuan Jiang, Zhi Jin (Peking University), Ge Li (Peking University)
11:50 10m Talk | From Sparse to Structured: A Diffusion-Enhanced and Feature-Aligned Framework for Coincidental Correctness Detection | Research Papers | Huan Xie (Chongqing University), Chunyan Liu (Chongqing University), Yan Lei (Chongqing University), Zhenyu Wu (School of Big Data & Software Engineering, Chongqing University), Jinping Wang (Chongqing University)
12:00 10m Talk | Watson: A Cognitive Observability Framework for the Reasoning of LLM-Powered Agents | Research Papers | Benjamin Rombaut (Centre for Software Excellence, Huawei Canada), Sogol Masoumzadeh (McGill University), Kirill Vasilevski (Huawei Canada), Dayi Lin (Centre for Software Excellence, Huawei Canada), Ahmed E. Hassan (Queen’s University)
12:10 10m Talk | Understanding Software Engineering Agents: A Study of Thought-Action-Result Trajectories | Research Papers | Islem Bouzenia (University of Stuttgart), Michael Pradel (CISPA Helmholtz Center for Information Security)
12:20 10m Talk | Triangle: Empowering Incident Triage with Multi-Agent | Research Papers | Zhaoyang Yu (Tsinghua University), Aoyang Fang (Chinese University of Hong Kong, Shenzhen), Minghua Ma (Microsoft), Jaskaran Singh Walia (Microsoft), Chaoyun Zhang (Microsoft), Shu Chi (Tsinghua University), Ze Li (Microsoft Azure), Murali Chintalapati (Microsoft Azure), Xuchao Zhang (Microsoft), Rujia Wang (Microsoft), Chetan Bansal (Microsoft Research), Saravan Rajmohan (Microsoft), Qingwei Lin (Microsoft), Shenglin Zhang (Nankai University), Dan Pei (Tsinghua University), Pinjia He (Chinese University of Hong Kong, Shenzhen)