ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia
Wed 17 May 2023 15:30 - 15:32 at Meeting Room 105 - Posters 1
Fri 19 May 2023 16:00 - 16:15 at Meeting Room 103 - Pre-trained and few shot learning for SE Chair(s): Yiling Lou

Transformers have gained popularity in the software engineering (SE) literature. These deep learning models are usually pre-trained through a self-supervised objective, meant to provide the model with basic knowledge about a language of interest (e.g., Java). A classic pre-training objective is the masked language model (MLM), in which a percentage of tokens from the input (e.g., a Java method) is masked, with the model in charge of predicting them. Once pre-trained, the model is then fine-tuned to support the specific downstream task of interest (e.g., code summarization). While there is evidence suggesting the boost in performance provided by pre-training, little is known about the impact of the specific pre-training objective(s) used. Indeed, MLM is just one of the possible pre-training objectives, and recent work from the natural language processing field suggests that pre-training objectives tailored for the specific downstream task of interest may substantially boost the model’s performance. For example, in the case of code summarization, a tailored pre-training objective could be the identification of an appropriate name for a given method, considering the method name to be generated as an extreme summary. In this study, we focus on the impact of pre-training objectives on the performance of transformers when automating code-related tasks. We start with a systematic literature review aimed at identifying the pre-training objectives used in SE. Then, we pre-train 30 transformers using both (i) generic pre-training objectives usually adopted in SE; and (ii) pre-training objectives tailored to the specific code-related tasks subject of our experimentation, namely bug-fixing, code summarization, and code completion. We also compare the pre-trained models with non-pre-trained ones and show the advantage brought by pre-training in different scenarios in which more or less fine-tuning data are available. Our results show that: (i) pre-training helps in boosting performance only if the amount of fine-tuning data available is small; (ii) the MLM objective is usually sufficient to maximize the prediction performance of the model, even when comparing it with pre-training objectives specialized for the downstream task at hand.
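
To make the two kinds of objectives discussed in the abstract concrete, here is a minimal Python sketch (not the authors' implementation; the 15% masking rate, the regex tokenizer, and the example Java method are illustrative assumptions) of (i) generic MLM masking over a code snippet and (ii) a summarization-tailored variant that masks only the method name, treating it as an extreme summary to be generated.

```python
import random
import re

MASK_TOKEN = "<MASK>"
MASK_RATE = 0.15  # assumed masking percentage, not taken from the paper


def tokenize(code):
    """Naive tokenizer splitting identifiers/keywords and punctuation (illustrative only)."""
    return re.findall(r"\w+|[^\w\s]", code)


def mask_tokens(tokens, mask_rate=MASK_RATE, seed=None):
    """Generic MLM objective: randomly mask a fraction of the input tokens.

    Returns the masked sequence and a dict mapping masked positions to the
    original tokens the model would be trained to predict.
    """
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok
            masked[i] = MASK_TOKEN
    return masked, targets


java_method = "public int sum(int a, int b) { return a + b; }"
tokens = tokenize(java_method)

# (i) Generic MLM pre-training objective: mask random tokens.
mlm_input, mlm_targets = mask_tokens(tokens, seed=42)
print(" ".join(mlm_input), mlm_targets)

# (ii) Task-tailored objective (here: code summarization): mask only the
# method name, treating it as an "extreme summary" the model must generate.
name_input = [MASK_TOKEN if tok == "sum" else tok for tok in tokens]
print(" ".join(name_input))
```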

Wed 17 May

Displayed time zone: Hobart

15:15 - 15:45
Posters 1 at Meeting Room 105
15:15
2m
Poster
Distribution-aware Fairness Test Generation
Posters
Sai Sathiesh Rajan Singapore University of Technology and Design, Singapore, Ezekiel Soremekun Royal Holloway, University of London, Sudipta Chattopadhyay Singapore University of Technology and Design, Yves Le Traon University of Luxembourg, Luxembourg
15:17
2m
Talk
Improving API Knowledge Discovery with ML: A Case Study of Comparable API Methods
Technical Track
Daye Nam Carnegie Mellon University, Brad A. Myers Carnegie Mellon University, Bogdan Vasilescu Carnegie Mellon University, Vincent J. Hellendoorn Carnegie Mellon University
Pre-print
15:19
2m
Talk
Diver: Oracle-Guided SMT Solver Testing with Unrestricted Random Mutations
Technical Track
Jongwook Kim Korea University, Sunbeom So Korea University, Hakjoo Oh Korea University
15:21
2m
Talk
Demystifying Exploitable Bugs in Smart Contracts
Technical Track
Zhuo Zhang Purdue University, Brian Zhang Harrison High School (Tippecanoe), Wen Xu PNM Labs, Zhiqiang Lin The Ohio State University
Pre-print
15:23
2m
Talk
An Empirical Study of Deep Learning Models for Vulnerability Detection
Technical Track
Benjamin Steenhoek Iowa State University, Md Mahbubur Rahman Iowa State University, Richard Jiles Iowa State University, Wei Le Iowa State University
Pre-print
15:25
2m
Talk
MorphQ: Metamorphic Testing of the Qiskit Quantum Computing Platform
Technical Track
Matteo Paltenghi University of Stuttgart, Germany, Michael Pradel University of Stuttgart
Pre-print
15:27
2m
Talk
Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction
Technical Track
Sungmin Kang KAIST, Juyeon Yoon Korea Advanced Institute of Science and Technology, Shin Yoo KAIST
Pre-print
15:30
2m
Talk
Automating Code-Related Tasks Through Transformers: The Impact of Pre-training
Technical Track
Rosalia Tufano Università della Svizzera Italiana, Luca Pascarella ETH Zurich, Gabriele Bavota Software Institute, USI Università della Svizzera italiana
15:32
2m
Talk
Generic Partition Refinement and Weighted Tree Automata
Showcase
Hans-Peter Deifel Friedrich-Alexander University Erlangen-Nürnberg, Germany, Stefan Milius, Lutz Schröder University of Erlangen-Nuremberg, Thorsten Wißmann Friedrich-Alexander University Erlangen-Nürnberg
Link to publication DOI Pre-print
15:34
2m
Talk
Learning Seed-Adaptive Mutation Strategies for Greybox Fuzzing
Technical Track
Myungho Lee Korea University, Sooyoung Cha Sungkyunkwan University, Hakjoo Oh Korea University
15:36
2m
Talk
Bug localization in game software engineering: evolving simulations to locate bugs in software models of video games
Showcase
Rodrigo Casamayor SVIT Research Group, Universidad San Jorge, Lorena Arcega San Jorge University, Francisca Pérez SVIT Research Group, Universidad San Jorge, Carlos Cetina San Jorge University, Spain
DOI
15:38
2m
Poster
Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems
Posters
Zhensu Sun The Hong Kong Polytechnic University, Xiaoning Du Monash University, Australia, Fu Song ShanghaiTech University, Shangwen Wang National University of Defense Technology, Li Li Beihang University
15:40
2m
Talk
A Qualitative Study on the Implementation Design Decisions of Developers (Distinguished Paper Award)
Technical Track
Jenny T. Liang Carnegie Mellon University, Maryam Arab George Mason University, Minhyuk Ko Virginia Tech, Amy Ko University of Washington, Thomas LaToza George Mason University
Pre-print
15:42
2m
Poster
Closing the Loop for Software Remodularisation - REARRANGE: An Effort Estimation Approach for Software Clustering-based Remodularisation
Posters
Alvin Jian Jin Tan, Chun Yong Chong Monash University Malaysia, Aldeida Aleti Monash University

Fri 19 May

Displayed time zone: Hobart

15:45 - 17:15
Pre-trained and few shot learning for SE (Technical Track / Journal-First Papers) at Meeting Room 103
Chair(s): Yiling Lou Fudan University
15:45
15m
Talk
On the validity of pre-trained transformers for natural language processing in the software engineering domain
Journal-First Papers
Alexander Trautsch University of Passau, Julian von der Mosel, Steffen Herbold University of Passau
16:00
15m
Talk
Automating Code-Related Tasks Through Transformers: The Impact of Pre-training
Technical Track
Rosalia Tufano Università della Svizzera Italiana, Luca Pascarella ETH Zurich, Gabriele Bavota Software Institute, USI Università della Svizzera italiana
16:15
15m
Talk
Log Parsing with Prompt-based Few-shot Learning
Technical Track
Van-Hoang Le The University of Newcastle, Hongyu Zhang The University of Newcastle
Pre-print
16:30
15m
Talk
Retrieval-Based Prompt Selection for Code-Related Few-Shot Learning
Technical Track
Noor Nashid University of British Columbia, Mifta Sintaha University of British Columbia, Ali Mesbah University of British Columbia (UBC)
Pre-print
16:45
15m
Paper
An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry
Technical Track
Wenxin Jiang Purdue University, Nicholas Synovic Loyola University Chicago, Matt Hyatt Loyola University Chicago, Taylor R. Schorlemmer Purdue University, Rohan Sethi Loyola University Chicago, Yung-Hsiang Lu Purdue University, George K. Thiruvathukal Loyola University Chicago and Argonne National Laboratory, James C. Davis Purdue University
Pre-print
17:00
15m
Talk
ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning
Technical Track
Shangqing Liu Nanyang Technological University, Bozhi Wu Nanyang Technological University, Xiaofei Xie Singapore Management University, Guozhu Meng Institute of Information Engineering, Chinese Academy of Sciences, Yang Liu Nanyang Technological University