Multi-task Learning based Pre-trained Language Model for Code Completion
Code completion is one of the most useful features in Integrated Development Environments (IDEs); it can accelerate software development by suggesting the next probable token based on the contextual code in real time. Recent studies have shown that statistical language modeling techniques can improve the performance of code completion tools by learning from large-scale software repositories. However, these models suffer from two major drawbacks: a) existing research uses static embeddings, which map a word to the same vector regardless of its context, so the differences in the meaning of a token across varying contexts are lost when each token is associated with a single representation; b) existing LM-based code completion models perform poorly on completing identifiers, and most of these models ignore the type information of identifiers. To address these challenges, in this paper we develop a multi-task learning based pre-trained language model for code understanding and code generation with a Transformer-based neural architecture. We pre-train it with hybrid objective functions that incorporate both code understanding and code generation tasks, and then fine-tune the pre-trained model on code completion. During completion, our model does not directly predict the next token. Instead, we adopt multi-task learning to predict the token and its type jointly, and utilize the predicted type to assist the token prediction. Experimental results on two real-world datasets demonstrate the effectiveness of our model compared with state-of-the-art methods.
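To make the joint token/type prediction idea concrete, below is a minimal PyTorch sketch of one way such a multi-task prediction head could be wired: a type classifier over the LM's hidden state, whose prediction is embedded and fused back in to assist the token classifier. The module names, the hard-argmax type fusion, and the loss weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of joint token/type prediction via multi-task learning.
# All names here are hypothetical; the paper's actual Transformer-based
# pre-trained architecture is more elaborate than this head alone.
import torch
import torch.nn as nn

class JointTokenTypeHead(nn.Module):
    """Predicts the next token's type, then uses the predicted type
    to assist next-token prediction (as the abstract describes)."""
    def __init__(self, hidden_dim, num_types, vocab_size, type_emb_dim=64):
        super().__init__()
        self.type_head = nn.Linear(hidden_dim, num_types)       # task 1: type classifier
        self.type_emb = nn.Embedding(num_types, type_emb_dim)   # embeds the predicted type
        self.token_head = nn.Linear(hidden_dim + type_emb_dim, vocab_size)

    def forward(self, hidden):                  # hidden: (batch, hidden_dim) from the LM
        type_logits = self.type_head(hidden)
        pred_type = type_logits.argmax(dim=-1)  # hard choice; teacher forcing is an alternative
        fused = torch.cat([hidden, self.type_emb(pred_type)], dim=-1)
        token_logits = self.token_head(fused)   # task 2: type-assisted token prediction
        return token_logits, type_logits

# Joint objective: weighted sum of the two cross-entropy losses.
def multitask_loss(token_logits, type_logits, token_tgt, type_tgt, alpha=0.5):
    ce = nn.functional.cross_entropy
    return ce(token_logits, token_tgt) + alpha * ce(type_logits, type_tgt)
```

In this sketch the two tasks share the underlying LM representation, so gradients from the type task regularize the features used for token prediction; feeding the type embedding into the token head is one plausible realization of "utilizing the predicted type to assist the token prediction".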
Wed 23 Sep (displayed time zone: UTC, Coordinated Universal Time)
01:10 - 02:10 | Recommender Systems for Software Engineering | Research Papers / Tool Demonstrations at Koala | Chair(s): Shaowei Wang (Mississippi State University)
01:10 (20m) Talk | API-Misuse Detection Driven by Fine-Grained API-Constraint Knowledge Graph | Research Papers | Xiaoxue Ren (Zhejiang University), Xinyuan Ye (Australian National University), Zhenchang Xing (Australian National University, Australia), Xin Xia (Monash University), Xiwei (Sherry) Xu (Data61 at CSIRO, Australia), Liming Zhu (Data61 at CSIRO, Australia / UNSW, Australia), JianLing Sun (Zhejiang University) | Pre-print
01:30 (20m) Talk | Multi-task Learning based Pre-trained Language Model for Code Completion | Research Papers | Fang Liu (Peking University), Ge Li (Peking University), Yunfei Zhao (Peking University), Zhi Jin (Peking University)
01:50 (10m) Talk | HomoTR: Online Test Recommendation System Based on Homologous Code Matching | Tool Demonstrations | Chenqian Zhu (Nanjing University), Weisong Sun (State Key Laboratory for Novel Software Technology, Nanjing University), Qin Liu, Yangyang Yuan (Nanjing University), Chunrong Fang (Nanjing University, China), Yong Huang (State Key Laboratory for Novel Software Technology, Nanjing University)