MSR 2022
Mon 23 - Tue 24 May 2022
co-located with ICSE 2022

Deep Learning (DL) models have been widely used to support code completion. Once properly trained, these models can take an incomplete code component (e.g., an incomplete function) as input and predict the missing tokens needed to finalize it. GitHub Copilot is an example of a code recommender built by training a DL model on millions of open source repositories: the source code of these repositories acts as training data, allowing the model to learn "how to program". The usage of such code is usually regulated by Free and Open Source Software (FOSS) licenses, which establish the conditions under which the licensed code can be redistributed or modified. As of today, it is unclear whether the code generated by DL models trained on open source code should be considered "new" or "derivative" work, with possible implications for license infringement. In this work, we run a large-scale study investigating the extent to which DL models tend to clone code from their training set when recommending code completions. Such an exploratory study can help in assessing the magnitude of the potential licensing issues mentioned above: if these models tend to generate new code that is unseen in the training set, then licensing issues are unlikely to occur. Otherwise, a revision of these licenses is urgently needed to regulate how the code generated by these models should be treated when used, for example, in a commercial setting. Highlights from our results show that ~10% to ~0.1% of the predictions generated by a state-of-the-art DL-based code completion tool are Type-1 clones of instances in the training set, depending on the size of the predicted code. Long predictions are unlikely to be cloned.
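For context, a Type-1 clone is a code fragment that is identical to another except for differences in whitespace, layout, and comments. A minimal sketch of such a check is shown below; this is illustrative only, not the detector used in the paper, and the normalization rules and function names are assumptions:

```python
import re

def normalize(code: str) -> str:
    """Type-1 normalization: strip comments, collapse whitespace.

    Assumes C-style comment syntax; a real detector would use a
    language-aware tokenizer instead of regular expressions.
    """
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.S)  # block comments
    code = re.sub(r"//.*", "", code)                   # line comments
    return " ".join(code.split())                      # collapse whitespace

def is_type1_clone(prediction: str, training_snippets) -> bool:
    """True if the prediction matches any training snippet after normalization."""
    norm = normalize(prediction)
    return any(normalize(s) == norm for s in training_snippets)

# Hypothetical example: same function, different layout plus a comment.
training = ["int add(int a, int b) { return a + b; }"]
prediction = "int add(int a,\n    int b) {  return a + b; }  // sum"
print(is_type1_clone(prediction, training))  # True
```

In practice, checking every prediction against millions of training instances would use hashing of the normalized fragments rather than a linear scan.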

Wed 18 May

Displayed time zone: Eastern Time (US & Canada)

13:00 - 13:50
Session 4: Software Quality (Bugs & Smells)
Data and Tool Showcase Track / Technical Papers at MSR
Main room - odd hours
Chair(s): Maxime Lamothe Polytechnique Montreal, Montreal, Canada, Mahmoud Alfadel University of Waterloo
13:00
7m
Talk
Dazzle: Using Optimized Generative Adversarial Networks to Address Security Data Class Imbalance Issue
Technical Papers
Rui Shu North Carolina State University, Tianpei Xia North Carolina State University, Laurie Williams North Carolina State University, Tim Menzies North Carolina State University
13:07
7m
Talk
To What Extent do Deep Learning-based Code Recommenders Generate Predictions by Cloning Code from the Training Set?
Technical Papers
Matteo Ciniselli Università della Svizzera Italiana, Luca Pascarella Università della Svizzera italiana (USI), Gabriele Bavota Software Institute, USI Università della Svizzera italiana
Pre-print
13:14
7m
Talk
How to Improve Deep Learning for Software Analytics (a case study with code smell detection)
Technical Papers
Rahul Yedida, Tim Menzies North Carolina State University
Pre-print
13:21
7m
Talk
Using Active Learning to Find High-Fidelity Builds
Technical Papers
Harshitha Menon Lawrence Livermore National Lab, Konstantinos Parasyris Lawrence Livermore National Laboratory, Todd Gamblin Lawrence Livermore National Laboratory, Tom Scogland Lawrence Livermore National Laboratory
Pre-print
13:28
4m
Talk
ApacheJIT: A Large Dataset for Just-In-Time Defect Prediction
Data and Tool Showcase Track
Hossein Keshavarz David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada, Mei Nagappan University of Waterloo
Pre-print
13:32
4m
Talk
ReCover: a Curated Dataset for Regression Testing Research
Data and Tool Showcase Track
Francesco Altiero Università degli Studi di Napoli Federico II, Anna Corazza Università degli Studi di Napoli Federico II, Sergio Di Martino Università degli Studi di Napoli Federico II, Adriano Peron Università degli Studi di Napoli Federico II, Luigi Libero Lucio Starace Università degli Studi di Napoli Federico II
13:36
14m
Live Q&A
Discussions and Q&A
Technical Papers

