ICSE 2020
Mon 5 - Sun 11 October 2020 Yongsan-gu, Seoul, South Korea
Mon 5 Oct 2020 14:40 - 14:55 at TBD6 - Debugging 2

Developers usually insert logging statements into source code to collect system runtime information, and the logged information is valuable for software maintenance. A logging statement usually prints one or more variables to record vital system status. However, because rigorous logging guidance is lacking and domain-specific knowledge is required, it is not easy for developers to decide which variables to log. To address this problem, in this paper [1], we propose an approach that recommends logging variables to developers during development by learning from existing logging statements. Unlike other prediction tasks in software engineering, this task faces two challenges: 1) Dynamic labels: different logging statements have access to different sets of variables, so the set of possible labels varies from sample to sample. 2) Out-of-vocabulary words: identifier names are not limited to natural-language words, and the test set usually contains many program tokens that fall outside the vocabulary built from the training set and therefore cannot be appropriately mapped to word embeddings. To deal with the first challenge, we formulate the task as a representation learning problem instead of a multi-label classification problem. Given a code snippet that lacks a logging statement, our approach first leverages a neural network with an RNN (recurrent neural network) layer and a self-attention layer to learn a proper representation of each program token, and then predicts whether each token should be logged using a unified binary classifier over the learned representations. To handle the second challenge, we propose a novel method that maps program tokens to word embeddings by making use of the pre-trained word embeddings of natural-language tokens. We evaluate our approach on 9 large, high-quality Java projects.
Our evaluation results show that the average MAP (Mean Average Precision) of our approach exceeds 0.84, outperforming a random-guess baseline and an information-retrieval-based method by large margins.
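The out-of-vocabulary handling described in the abstract can be illustrated with a minimal sketch. One plausible way to reuse pre-trained natural-language embeddings for an unseen identifier is to split it on underscores and camelCase boundaries and average the pre-trained vectors of its subwords; the helper names, the toy embedding table, and the averaging heuristic below are illustrative assumptions, not the paper's exact method.

```python
import re

# Toy pre-trained natural-language embeddings (illustrative values only).
PRETRAINED = {
    "max":   [0.9, 0.1, 0.0],
    "retry": [0.2, 0.8, 0.1],
    "count": [0.1, 0.3, 0.7],
}

def split_identifier(name):
    """Split a program identifier on underscores and camelCase boundaries."""
    subwords = []
    for part in re.split(r"[_$]+", name):
        subwords += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", part)
    return [s.lower() for s in subwords if s]

def embed_identifier(name, table=PRETRAINED, dim=3):
    """Average the pre-trained vectors of the identifier's known subwords."""
    vecs = [table[s] for s in split_identifier(name) if s in table]
    if not vecs:
        return [0.0] * dim  # fall back to a zero (or learned <unk>) vector
    return [sum(col) / len(vecs) for col in zip(*vecs)]

print(split_identifier("maxRetryCount"))  # ['max', 'retry', 'count']
```

With this scheme, an identifier such as `maxRetryCount`, which never appears verbatim in the training vocabulary, still receives a meaningful vector composed from the natural-language words `max`, `retry`, and `count`.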
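The MAP figure reported above can be computed as follows; this is the standard Mean Average Precision definition for ranked recommendations (averaging precision-at-k over the positions of the truly logged variables), which may differ in detail from the exact variant used in the paper.

```python
def average_precision(ranked, relevant):
    """AP of one ranked recommendation list: mean precision@k over hits."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(samples):
    """MAP over (ranked_candidates, actually_logged_variables) pairs."""
    return sum(average_precision(r, rel) for r, rel in samples) / len(samples)

# One logging site: the model ranks candidate variables; 'conn' and 'err'
# were actually logged. Precision@1 = 1/1, precision@3 = 2/3, AP = (1 + 2/3)/2.
ap = average_precision(["conn", "size", "err"], {"conn", "err"})
```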

Mon 5 Oct

14:00 - 15:40: Paper Presentations - Debugging 2 at TBD6
14:00 - 14:20 (Papers)
Talk
Boyuan Chen (York University), Zhen Ming (Jack) Jiang (York University)
14:20 - 14:40 (Software Engineering in Practice)
Talk
Jinhan Kim, Valeriy Savchenko (Ivannikov Institute for System Programming of the RAS), Kihyuck Shin (Samsung Electronics), Konstantin Sorokin (Ivannikov Institute for System Programming of the RAS), Hyunseok Jeon (Samsung Electronics), Georgiy Pankratenko (Ivannikov Institute for System Programming of the RAS), Sergey Markov (Ivannikov Institute for System Programming of the RAS), Chul-Joo Kim (Samsung Electronics)
14:40 - 14:55 (Journal First)
Talk
Zhongxin Liu (Zhejiang University), Xin Xia (Monash University), David Lo (Singapore Management University), Zhenchang Xing (Australian National University), Ahmed E. Hassan (Queen's University), Shanping Li (Zhejiang University)
14:55 - 15:10 (Journal First)
Talk
Xuan Huo (Nanjing University), Ferdian Thung (Singapore Management University), Ming Li (Nanjing University), David Lo (Singapore Management University), Shu-Ting Shi (Nanjing University)
15:10 - 15:25 (Journal First)
Talk
Mozhan Soltani (Leiden University), Pouria Derakhshanfar (Delft University of Technology), Xavier Devroey (Delft University of Technology), Arie van Deursen (Delft University of Technology)
15:25 - 15:40 (Journal First)
Talk
Yi Zeng (Concordia University), Jinfu Chen (Concordia University, Canada), Weiyi Shang (Concordia University), Tse-Hsun (Peter) Chen (Concordia University)