Transformer-based Language Models (LMs) for automatic code completion have shown great promise so far, yet these models are rarely evaluated on real data. This study provides both quantitative and qualitative assessments of three public code LMs when completing real-world code. We first developed an open-source IDE extension, CodeAssist, for the online evaluation of the models. We collected real auto-completion usage data from over 1,200 users for more than a year, resulting in 2M completions. We then evaluated the models using six standard metrics across twelve programming languages. Next, we conducted a qualitative study of 1,690 real completion requests to identify the reasons behind poor model performance. Finally, we compared the models’ performance in online and offline settings using benchmark synthetic datasets and two masking strategies.
Our findings suggest that while developers use code completion across many languages, the best results are achieved for mainstream languages such as Python and Java. InCoder outperformed the other models across all programming languages, highlighting the importance of training data and objectives. Our study also revealed that offline evaluations do not accurately reflect real-world scenarios. In the qualitative analysis of the models’ predictions, we found that 66.3% of failures were due to the models’ limitations, 24.4% were due to inappropriate model usage in a development context, and 9.3% were valid requests that developers overwrote. Given these findings, we propose several strategies to overcome the current limitations, including refining training objectives, improving resilience to typographical errors, adopting hybrid approaches, and enhancing implementations and usability.
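The abstract refers to six standard offline metrics without defining them. As a hedged illustration only (the metric names, definitions, and code below are assumptions, not the paper's exact setup), two metrics commonly used for offline code-completion evaluation are exact match and normalized edit similarity; a minimal Python sketch follows.

# Illustrative only: two metrics commonly used for offline code-completion
# evaluation. The paper's six metrics and their exact definitions may differ.
from difflib import SequenceMatcher

def exact_match(prediction: str, ground_truth: str) -> bool:
    # A completion counts as correct only if it reproduces the developer's
    # code verbatim (ignoring surrounding whitespace).
    return prediction.strip() == ground_truth.strip()

def edit_similarity(prediction: str, ground_truth: str) -> float:
    # Character-level similarity in [0, 1]; rewards near-misses that
    # exact match would score as failures.
    return SequenceMatcher(None, prediction.strip(), ground_truth.strip()).ratio()

# A near-miss completion: wrong variable name, otherwise identical.
pred = "return sorted(items, key=lambda x: x.name)"
gold = "return sorted(items, key=lambda i: i.name)"
print(exact_match(pred, gold))                 # False
print(round(edit_similarity(pred, gold), 2))   # high similarity, ~0.95

Exact match is strict and easy to interpret, while edit similarity gives partial credit; studies like this one typically report several such complementary measures side by side.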
Fri 19 Apr (displayed time zone: Lisbon)
16:00 - 17:30 | Language Models and Generated Code 4 | New Ideas and Emerging Results / Research Track | Room: Almada Negreiros | Chair(s): Shin Yoo (Korea Advanced Institute of Science and Technology)
16:00 (15m) Talk | Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code | Research Track | Rangeet Pan (IBM Research), Ali Reza Ibrahimzada (University of Illinois Urbana-Champaign), Rahul Krishna (IBM Research), Divya Sankar (IBM Research), Lambert Pouguem Wassi (IBM Research), Michele Merler (IBM Research), Boris Sobolev (IBM Research), Raju Pavuluri (IBM T.J. Watson Research Center), Saurabh Sinha (IBM Research), Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign) | DOI, Pre-print, Media Attached
16:15 (15m) Talk | Traces of Memorisation in Large Language Models for Code | Research Track | Ali Al-Kaswan (Delft University of Technology, Netherlands), Maliheh Izadi (Delft University of Technology), Arie van Deursen (Delft University of Technology) | Pre-print
16:30 (15m) Talk | Language Models for Code Completion: A Practical Evaluation | Research Track | Maliheh Izadi (Delft University of Technology), Jonathan Katzy (Delft University of Technology), Tim van Dam (Delft University of Technology), Marc Otten (Delft University of Technology), Răzvan Mihai Popescu (Delft University of Technology), Arie van Deursen (Delft University of Technology) | Pre-print
16:45 (15m) Talk | Evaluating Large Language Models in Class-Level Code Generation | Research Track | Xueying Du (Fudan University), Mingwei Liu (Fudan University), Kaixin Wang (Fudan University), Hanlin Wang (Fudan University), Junwei Liu (Huazhong University of Science and Technology), Yixuan Chen (Fudan University), Jiayi Feng (Fudan University), Chaofeng Sha (Fudan University), Xin Peng (Fudan University), Yiling Lou (Fudan University) | Pre-print
17:00 (7m) Talk | Naturalness of Attention: Revisiting Attention in Code Language Models | New Ideas and Emerging Results | Pre-print
17:07 (7m) Talk | Towards Trustworthy AI Software Development Assistance | New Ideas and Emerging Results | DOI, Pre-print