ASE 2021
Sun 14 - Sat 20 November 2021 Australia
Wed 17 Nov 2021 09:40 - 09:50 at Kangaroo - Learning I Chair(s): Denys Poshyvanyk

Pre-trained models of code built on the transformer architecture have performed well on many software engineering (SE) tasks, including predictive code generation. However, whether the vector representations from these pre-trained models comprehensively encode characteristics of source code well enough to be applicable to a broad spectrum of downstream tasks remains an open question.

One way to investigate this is with diagnostic tasks called probes. In this paper, we construct four probing tasks for pre-trained code models, probing for surface-level, syntactic, structural, and semantic information. We show how probes can be used to identify whether models are deficient in understanding certain code properties, to characterize different model layers, and to gain insight into the sample efficiency each type of task may require.
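In broad terms, a probe of this kind is a simple classifier trained on frozen representations from a pre-trained model: if the classifier can predict a code property from the embeddings, that property is plausibly encoded in them. The sketch below illustrates the idea with synthetic vectors standing in for model embeddings (the data, dimensions, and property are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: in a real probe, these would be frozen embeddings extracted
# from one layer of a pre-trained code model (e.g. CodeBERT) for a corpus of
# code snippets, paired with a labelled code property per snippet.
rng = np.random.default_rng(0)
n_snippets, dim = 1000, 64
embeddings = rng.normal(size=(n_snippets, dim))

# Synthetic binary property, made linearly recoverable from the first few
# dimensions purely so the example has signal to find.
labels = (embeddings[:, :8].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0
)

# The probe itself: a linear classifier over frozen representations.
# High held-out accuracy suggests the property is (linearly) encoded
# in that layer; chance-level accuracy suggests it is not.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = probe.score(X_te, y_te)
print(f"probe accuracy: {accuracy:.2f}")
```

Keeping the probe deliberately simple (here, logistic regression) matters: a high-capacity probe could learn the property itself rather than reveal what the embeddings already contain.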

We probe four models that vary in their expected knowledge of code properties: BERT (pre-trained on English), CodeBERT and CodeBERTa (pre-trained on source code and natural-language documentation), and GraphCodeBERT (pre-trained on source code with dataflow). While GraphCodeBERT performs more consistently overall, we find that BERT performs surprisingly well on some code tasks, which calls for further investigation. We release all the task datasets and evaluation code publicly.

Wed 17 Nov

Displayed time zone: Hobart

09:00 - 10:00
Learning I: NIER track / Research Papers / Tool Demonstrations at Kangaroo
Chair(s): Denys Poshyvanyk William and Mary
09:00
20m
Talk
DeepMetis: Augmenting a Deep Learning Test Set to Increase its Mutation Score
Research Papers
Vincenzo Riccio USI Lugano, Nargiz Humbatova Università della Svizzera Italiana (USI), Gunel Jahangirova USI Lugano, Paolo Tonella USI Lugano
09:20
20m
Talk
Efficient state synchronisation in model-based testing through reinforcement learning
Research Papers
Uraz Cengiz Türker University of Leicester, UK, Robert Hierons University of Sheffield, Mohammad Reza Mousavi King's College London, Ivan Tyukin University of Leicester
09:40
10m
Talk
What do pre-trained code models know about code?
NIER track
Anjan Karmakar Free University of Bozen-Bolzano, Romain Robbes
09:50
5m
Talk
DEVIATE: A Deep Learning Variance Testing Framework
Tool Demonstrations
Hung Viet Pham University of Waterloo, Mijung Kim Purdue University, Lin Tan Purdue University, Yaoliang Yu University of Waterloo, Nachiappan Nagappan Microsoft Research