ICSE 2021
Mon 17 May - Sat 5 June 2021

Deep learning (DL) techniques are gaining more and more attention in the software engineering community. They have been used to support several code-related tasks, such as automatic bug fixing and code comment generation. Recent studies in the Natural Language Processing (NLP) field have shown that the Text-To-Text Transfer Transformer (T5) architecture can achieve state-of-the-art performance for a variety of NLP tasks. The basic idea behind T5 is to first pre-train a model on a large and generic dataset using a self-supervised task (e.g., filling masked words in sentences). Once the model is pre-trained, it is fine-tuned on smaller and specialized datasets, each one related to a specific task (e.g., language translation, sentence classification). In this paper, we empirically investigate how the T5 model performs when pre-trained and fine-tuned to support code-related tasks. We pre-train a T5 model on a dataset composed of natural language English text and source code. Then, we fine-tune this model by reusing datasets from four previous works that used DL techniques to: (i) fix bugs, (ii) inject code mutants, (iii) generate assert statements, and (iv) generate code comments. We compare the performance of this single model with the results reported in the four original papers proposing DL-based solutions for those four tasks. We show that our T5 model, exploiting additional data for the self-supervised pre-training phase, can achieve performance improvements over the four baselines.
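To make the self-supervised pre-training task concrete, the sketch below builds a T5-style span-corruption example in plain Python: masked spans in the input are replaced by sentinel tokens (`<extra_id_N>`), and the target pairs each sentinel with the tokens it removed. This is a minimal illustration of the objective described in the abstract, using the sentinel-token convention from the T5 paper; the function name and the fixed mask positions are our own choices for demonstration, not the authors' actual preprocessing pipeline.

```python
def make_denoising_example(tokens, span_starts, span_len=1):
    """Build one T5-style denoising pair (illustrative sketch).

    Each masked span in the input is replaced by a sentinel token
    <extra_id_N>; the target lists each sentinel followed by the
    tokens it replaced, ending with a final closing sentinel.
    """
    inp, tgt = [], []
    sentinel = 0
    starts = set(span_starts)
    i = 0
    while i < len(tokens):
        if i in starts:
            # Replace the span with a sentinel in the input...
            inp.append(f"<extra_id_{sentinel}>")
            # ...and record the removed tokens in the target.
            tgt.append(f"<extra_id_{sentinel}>")
            tgt.extend(tokens[i:i + span_len])
            sentinel += 1
            i += span_len
        else:
            inp.append(tokens[i])
            i += 1
    tgt.append(f"<extra_id_{sentinel}>")  # closing sentinel, as in T5
    return " ".join(inp), " ".join(tgt)


inp, tgt = make_denoising_example("the cat sat on the mat".split(), [1, 4])
print(inp)  # the <extra_id_0> sat on <extra_id_1> mat
print(tgt)  # <extra_id_0> cat <extra_id_1> the <extra_id_2>
```

During pre-training, the model learns to generate the target from the corrupted input; fine-tuning then reuses the same text-to-text interface with task-specific input/output pairs (e.g., buggy code in, fixed code out).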

Thu 27 May
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:00 - 11:00
3.1.2. Deep Neural Networks: Supporting SE Tasks #2 (SEIP - Software Engineering in Practice / Journal-First Papers / Technical Track) at Blended Sessions Room 2 (+12h)
Chair(s): Sira Vegas (Universidad Politécnica de Madrid)
10:00
20m
Paper
NNStreamer: Efficient and Agile Development of On-Device AI Systems
SEIP - Software Engineering in Practice
MyungJoo Ham, Jijoong Moon, Geunsik Lim, Jaeyun Jung, Hyoungjoo Ahn, Wook Song, Sangjung Woo, Parichay Kapoor, Dongju Chae, Gichan Jang, Yongjoo Ahn, Jihoon Lee (Samsung Electronics)
Pre-print
10:20
20m
Paper
Deep Learning Based Program Generation from Requirements Text: Are We There Yet?
Journal-First Papers
Hui Liu (Beijing Institute of Technology), Mingzhu Shen (Beijing Institute of Technology), Jiaqi Zhu (Beijing Institute of Technology), Nan Niu (University of Cincinnati), Ge Li (Peking University), Lu Zhang (Peking University, China)
Link to publication DOI Pre-print
10:40
20m
Paper
Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks
Technical Track
Antonio Mastropaolo (Università della Svizzera italiana), Simone Scalabrino (University of Molise), Nathan Cooper (William & Mary), David Nader Palacio (William & Mary), Denys Poshyvanyk (College of William & Mary), Rocco Oliveto (University of Molise), Gabriele Bavota (Software Institute, USI Università della Svizzera italiana)
Pre-print
22:00 - 23:00
Mirror session (+12h): the same three papers are presented again in the same order (NNStreamer at 22:00, Deep Learning Based Program Generation from Requirements Text at 22:20, Studying the Usage of Text-To-Text Transfer Transformer at 22:40).

Information for Participants
Info for Blended Sessions Room 2: