Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study

Tim van Dam, Maliheh Izadi, Arie van Deursen (Delft University of Technology)
Abstract—Transformer-based pre-trained models have recently achieved great results on many software engineering tasks, including automatic code completion, a staple of the developer’s toolkit. While much work has striven to improve the code-understanding abilities of such models, the opposite direction, making the code itself easier to understand, has not been properly investigated. In this study, we ask whether enriching source code with contextual data that makes it easier to understand improves the performance of pre-trained code language models on the code completion task. We consider type annotations and comments, two common forms of additional contextual information that often help developers understand code better. For the experiments, we study code completion at two granularity levels, token and line completion, using three recent large-scale language models for source code, UniXcoder, CodeGPT, and InCoder, together with five evaluation metrics. Finally, we perform the Wilcoxon signed-rank test to gauge significance and measure the effect size. Contrary to our expectations, all models perform better when type annotations are removed (albeit with small effect sizes). For comments, we find that the models perform better in the presence of multi-line comments (again with small effect sizes). Based on these observations, we recommend making appropriate design choices when training, fine-tuning, or simply selecting such models, given the intended data and application. Better evaluations and multi-modal techniques can also be investigated further to improve the practicality and accuracy of auto-completions.
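To make the studied manipulation concrete, below is a minimal sketch, assuming Python sources, of the kind of preprocessing the abstract implies: producing a variant of each file with type annotations stripped, so model performance can be compared on otherwise identical code. The paper's own pipeline and target language may differ; the StripAnnotations class and the example function are purely illustrative.

import ast

class StripAnnotations(ast.NodeTransformer):
    """Remove type annotations while leaving program behavior unchanged."""

    def visit_FunctionDef(self, node):
        node.returns = None  # drop the return annotation
        for arg in node.args.args + node.args.posonlyargs + node.args.kwonlyargs:
            arg.annotation = None  # drop parameter annotations
        self.generic_visit(node)
        return node

    def visit_AnnAssign(self, node):
        # `total: int = a + b` becomes `total = a + b`;
        # a bare declaration such as `x: int` is removed entirely.
        if node.value is None:
            return None
        return ast.copy_location(
            ast.Assign(targets=[node.target], value=node.value), node)

source = "def add(a: int, b: int) -> int:\n    total: int = a + b\n    return total\n"
tree = StripAnnotations().visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))  # annotation-free variant

Similarly, the significance analysis named in the abstract can be outlined with standard tooling. The sketch below pairs per-sample metric scores from two runs of the same model (with and without annotations), applies SciPy's Wilcoxon signed-rank test, and derives a matched-pairs rank-biserial correlation as the effect size. The variable names and the specific effect-size statistic are assumptions for illustration, not the paper's exact setup.

import numpy as np
from scipy.stats import wilcoxon, rankdata

def paired_comparison(scores_a, scores_b):
    """Wilcoxon signed-rank p-value plus a rank-biserial effect size."""
    _, p_value = wilcoxon(scores_a, scores_b)
    diffs = np.asarray(scores_a) - np.asarray(scores_b)
    diffs = diffs[diffs != 0]        # the test ignores zero differences
    ranks = rankdata(np.abs(diffs))  # rank the absolute differences
    # Kerby's simple difference formula: r in [-1, 1]; |r| near 0 = small effect.
    r = (ranks[diffs > 0].sum() - ranks[diffs < 0].sum()) / ranks.sum()
    return p_value, r

# Hypothetical usage with 1000 paired per-sample scores (e.g., edit similarity).
rng = np.random.default_rng(0)
with_types = rng.random(1000)
without_types = np.clip(with_types + rng.normal(0.01, 0.05, 1000), 0, 1)
p, r = paired_comparison(without_types, with_types)
print(f"p = {p:.4g}, rank-biserial r = {r:+.3f}")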