CodeFill: Multi-token Code Completion by Jointly Learning from Structure and Naming Sequences
Wed 11 May 2022, 11:05-11:10, ICSE room 3 (odd hours). Session: Search-Based Software Engineering 3. Chair(s): Mohamed Wiem Mkaouer
Code completion is an essential feature of IDEs, yet current autocompleters are limited to either grammar-based or NLP-based single-token completions. Both approaches have significant drawbacks: grammar-based autocompletion offers little help in dynamically typed languages, where type information is often unavailable, whereas NLP-based autocompletion struggles to capture the semantics of the programming language, producing suggestions that ignore the developer's context.
In this work, we present CodeFill, a language model for autocompletion that combines structure and naming information. Using a parallel Transformer architecture and multi-task learning, CodeFill consumes sequences of source-code token names together with their equivalent AST token types. Uniquely, CodeFill is trained for both single-token and multi-token (statement-level) prediction, which enables it to learn long-range dependencies among grammatical and naming elements.

We train CodeFill on two datasets, consisting of 29M and 425M lines of code, respectively. To make the evaluation more realistic, we develop a method to automatically infer points in the source code at which completion matters. We compare CodeFill against four baselines and two state-of-the-art models, GPT-C and TravTrans+. CodeFill surpasses all baselines in single-token prediction (MRR: 70.9% vs. 66.2% and 67.8%) and significantly outperforms the state of the art in multi-token prediction (ROUGE-L: 63.7% vs. 52.4% and 59.2%, for n=4 tokens). We publicly release our source code and data for replication and use.
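To make the joint structure-and-naming idea concrete, below is a minimal, hypothetical sketch in PyTorch. It is not CodeFill's actual implementation: the paper describes a parallel Transformer architecture, whereas this sketch simply sums a token-name embedding and an AST-type embedding before a single shared causal encoder. All class names, vocabulary sizes, and the equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointNameTypeModel(nn.Module):
    """Illustrative sketch (not CodeFill itself): a shared causal
    Transformer over two parallel input sequences, with one output
    head per task (next token name, next AST token type)."""

    def __init__(self, name_vocab=10_000, type_vocab=200,
                 d_model=256, n_heads=8, n_layers=4, max_len=512):
        super().__init__()
        self.name_emb = nn.Embedding(name_vocab, d_model)
        self.type_emb = nn.Embedding(type_vocab, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.name_head = nn.Linear(d_model, name_vocab)  # task 1: names
        self.type_head = nn.Linear(d_model, type_vocab)  # task 2: AST types

    def forward(self, names, types):
        seq_len = names.size(1)
        pos = torch.arange(seq_len, device=names.device)
        h = self.name_emb(names) + self.type_emb(types) + self.pos_emb(pos)
        # Causal mask: each position attends only to earlier positions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=names.device), diagonal=1)
        h = self.encoder(h, mask=mask)
        return self.name_head(h), self.type_head(h)

model = JointNameTypeModel()
names = torch.randint(0, 10_000, (2, 16))  # token-name ids
types = torch.randint(0, 200, (2, 16))     # parallel AST-type ids
name_logits, type_logits = model(names, types)
# Multi-task loss: sum of the two next-token objectives
# (shift by one so position t predicts token t+1).
loss = (F.cross_entropy(name_logits[:, :-1].reshape(-1, 10_000),
                        names[:, 1:].reshape(-1))
        + F.cross_entropy(type_logits[:, :-1].reshape(-1, 200),
                          types[:, 1:].reshape(-1)))
loss.backward()
```

Multi-token (statement-level) completion would then decode from the name head autoregressively until an end-of-statement marker is produced; this, too, is an assumption about how such decoding might look rather than the paper's exact procedure.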
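The single-token metric, MRR (mean reciprocal rank), scores each completion point by the reciprocal of the rank at which the correct token appears in the model's suggestion list, averaged over all points. A small self-contained sketch follows; the function name and example inputs are illustrative, not taken from the paper's evaluation code.

```python
def mean_reciprocal_rank(ranked_suggestions, ground_truth):
    """MRR over completion points: 1/rank of the correct token in each
    ranked suggestion list (0 if absent), averaged across points."""
    total = 0.0
    for suggestions, target in zip(ranked_suggestions, ground_truth):
        if target in suggestions:
            total += 1.0 / (suggestions.index(target) + 1)
        # else: correct token never suggested, contributes 0
    return total / len(ground_truth)

# Two completion points; correct token ranked 1st and 3rd:
# MRR = (1/1 + 1/3) / 2 ≈ 0.667
print(mean_reciprocal_rank([["open", "read"], ["len", "max", "sum"]],
                           ["open", "sum"]))
```

Under this definition, the reported 70.9% MRR means that, roughly speaking, the correct token sits near the top of CodeFill's ranked suggestions on average.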