Automating Code-Related Tasks Through Transformers: The Impact of Pre-training
Fri 19 May 2023, 16:00 - 16:15, at Meeting Room 103 - Pre-trained and few shot learning for SE - Chair(s): Yiling Lou
Transformers have gained popularity in the software engineering (SE) literature. These deep learning models are usually pre-trained through a self-supervised objective, meant to provide the model with basic knowledge about a language of interest (e.g., Java). A classic pre-training objective is the masked language model (MLM), in which a percentage of tokens from the input (e.g., a Java method) is masked, with the model in charge of predicting them. Once pre-trained, the model is then fine-tuned to support the specific downstream task of interest (e.g., code summarization). While there is evidence of the performance boost provided by pre-training, little is known about the impact of the specific pre-training objective(s) used. Indeed, MLM is just one of the possible pre-training objectives, and recent work from the natural language processing field suggests that pre-training objectives tailored to the specific downstream task of interest may substantially boost the model’s performance. For example, in the case of code summarization, a tailored pre-training objective could be the generation of an appropriate name for a given method, treating the method name as an extreme summary of the method body. In this study, we focus on the impact of pre-training objectives on the performance of transformers when automating code-related tasks. We start with a systematic literature review aimed at identifying the pre-training objectives used in SE. Then, we pre-train 30 transformers using both (i) generic pre-training objectives usually adopted in SE; and (ii) pre-training objectives tailored to the specific code-related tasks subject of our experimentation, namely bug fixing, code summarization, and code completion. We also compare the pre-trained models with non-pre-trained ones and show the advantage brought by pre-training in scenarios in which more or less fine-tuning data are available. Our results show that: (i) pre-training helps boost performance only when the amount of available fine-tuning data is small; (ii) the MLM objective is usually sufficient to maximize the prediction performance of the model, even when compared with pre-training objectives specialized for the downstream task at hand.
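As a concrete illustration of the MLM objective described in the abstract, the Python sketch below shows how input/target pairs could be derived from a tokenized Java method: a fraction of tokens is replaced by a mask symbol, and the model is trained to recover the original tokens. The whitespace tokenizer, 15% mask rate, and `<MASK>` symbol are illustrative assumptions, not the exact configuration used in the paper.

```python
import random

# Minimal sketch of masked language model (MLM) input preparation:
# a percentage of tokens from a Java method is masked, and the model
# must predict the original tokens at the masked positions.

MASK_TOKEN = "<MASK>"
MASK_RATE = 0.15  # assumed masking percentage (BERT-style), for illustration only


def mask_tokens(tokens, mask_rate=MASK_RATE, seed=None):
    """Randomly replace a fraction of tokens with MASK_TOKEN.

    Returns the corrupted token sequence (model input) and the list of
    (position, original_token) pairs the model is trained to predict.
    """
    rng = random.Random(seed)
    corrupted = list(tokens)
    targets = []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted[i] = MASK_TOKEN
            targets.append((i, tok))
    return corrupted, targets


# Example: a whitespace-tokenized Java method (real setups use subword tokenizers).
java_method = "public int sum ( int a , int b ) { return a + b ; }".split()
inputs, labels = mask_tokens(java_method, seed=42)
print(" ".join(inputs))  # corrupted method fed to the model
print(labels)            # positions and original tokens to be predicted
```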
Fri 19 May (displayed time zone: Hobart)

15:45 - 17:15 | Pre-trained and few shot learning for SE | Technical Track / Journal-First Papers | Meeting Room 103 | Chair(s): Yiling Lou (Fudan University)

15:45 | 15m | Talk | On the validity of pre-trained transformers for natural language processing in the software engineering domain | Journal-First Papers | Alexander Trautsch (University of Passau), Julian von der Mosel, Steffen Herbold (University of Passau)
16:00 | 15m | Talk | Automating Code-Related Tasks Through Transformers: The Impact of Pre-training | Technical Track | Rosalia Tufano (Università della Svizzera Italiana), Luca Pascarella (ETH Zurich), Gabriele Bavota (Software Institute, USI Università della Svizzera italiana)
16:15 | 15m | Talk | Log Parsing with Prompt-based Few-shot Learning | Technical Track | Pre-print
16:30 | 15m | Talk | Retrieval-Based Prompt Selection for Code-Related Few-Shot Learning | Technical Track | Noor Nashid (University of British Columbia), Mifta Sintaha (University of British Columbia), Ali Mesbah (University of British Columbia) | Pre-print
16:45 | 15m | Paper | An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry | Technical Track | Wenxin Jiang (Purdue University), Nicholas Synovic (Loyola University Chicago), Matt Hyatt (Loyola University Chicago), Taylor R. Schorlemmer (Purdue University), Rohan Sethi (Loyola University Chicago), Yung-Hsiang Lu (Purdue University), George K. Thiruvathukal (Loyola University Chicago and Argonne National Laboratory), James C. Davis (Purdue University) | Pre-print
17:00 | 15m | Talk | ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning | Technical Track | Shangqing Liu (Nanyang Technological University), Bozhi Wu (Nanyang Technological University), Xiaofei Xie (Singapore Management University), Guozhu Meng (Institute of Information Engineering, Chinese Academy of Sciences), Yang Liu (Nanyang Technological University)