FORGE 2024
Sun 14 Apr 2024 Lisbon, Portugal
co-located with ICSE 2024

Does the training of large language models infringe upon code licenses? Furthermore, are there any datasets available that can be safely used to train these models without violating such licenses? In our study, we assess current trends in the field and the importance of incorporating code into the training of large language models. Additionally, we examine publicly available datasets to determine whether models can be trained on them without the risk of legal issues in the future. To this end, we compiled a list of 53 large language models trained on code, extracted their training datasets, and analyzed how much they overlap with a dataset we created consisting exclusively of strong copyleft code.
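The overlap analysis described above can be pictured as an exact-match comparison between two corpora. The sketch below is a minimal illustration of that idea, assuming content hashing of every file; the function names, directory layout, and usage paths are hypothetical and not taken from the paper.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def corpus_hashes(root: Path) -> set[str]:
    """Hash every regular file found under a corpus directory."""
    return {file_hash(p) for p in root.rglob("*") if p.is_file()}

def exact_duplicates(training_root: Path, copyleft_root: Path) -> set[str]:
    """Hashes of files that appear byte-for-byte in both corpora."""
    return corpus_hashes(training_root) & corpus_hashes(copyleft_root)

# Hypothetical usage:
# dupes = exact_duplicates(Path("training-corpus"), Path("copyleft-corpus"))
# print(f"{len(dupes)} exact duplicates found")
```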

Our analysis revealed that every dataset we examined contained license violations, despite the datasets being selected based on the licenses of their associated repositories. We analyzed a total of 514 million code files and discovered 38 million exact duplicates of files in our strong copyleft dataset. Additionally, we examined 171 million file-leading comments, identifying 16 million that carried strong copyleft licenses and another 11 million that discouraged copying without explicitly mentioning a license. Based on the findings of our study, which highlight the pervasive issue of license violations in large language models trained on code, we recommend that both researchers and the community prioritize the development and adoption of best practices for dataset creation and management.
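The file-leading comment analysis can likewise be approximated by scanning each file's header for license keywords. The following sketch is a rough heuristic, assuming a small hand-picked set of patterns; the keyword lists and function name are illustrative only and are not the authors' actual matching rules.

```python
import re

# Hypothetical keyword patterns; the paper's actual matching rules may differ.
STRONG_COPYLEFT = re.compile(
    r"GNU (Affero )?General Public License|(?<!L)GPL-[23]\.0|AGPL-3\.0",
    re.IGNORECASE,
)
COPY_DISCOURAGED = re.compile(
    r"all rights reserved|do not (copy|distribute|redistribute)",
    re.IGNORECASE,
)

def classify_leading_comment(source: str, max_lines: int = 30) -> str:
    """Roughly classify a source file by the license hints in its leading lines."""
    header = "\n".join(source.splitlines()[:max_lines])
    if STRONG_COPYLEFT.search(header):
        return "strong-copyleft"
    if COPY_DISCOURAGED.search(header):
        return "copying-discouraged"
    return "unlabelled"

# Hypothetical usage:
# print(classify_leading_comment(Path("example.c").read_text()))
```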

Sun 14 Apr

Displayed time zone: Lisbon

14:00 - 15:30
Keynote 2 & Properties of Foundation Models
Research Track / Keynotes at Luis de Freitas Branco
Chair(s): David Lo Singapore Management University, Feifei Niu University of Ottawa
14:00
40m
Keynote
Keynote 2: Towards an Interpretable Science of Deep Learning for Software Engineering: A Causal Inference View
Keynotes
Denys Poshyvanyk William & Mary
14:40
14m
Full-paper
Exploring the Impact of the Output Format on the Evaluation of Large Language Models for Code Translation
Full Paper
Research Track
Marcos Macedo Queen's University, Kingston, Ontario, Yuan Tian Queen's University, Kingston, Ontario, Filipe Cogo Centre for Software Excellence, Huawei Canada, Bram Adams Queen's University
Pre-print
14:54
7m
Short-paper
Is Attention All You Need? Toward a Conceptual Model for Social Awareness in Large Language Models
New Idea Paper
Research Track
Gianmario Voria University of Salerno, Gemma Catolino University of Salerno, Fabio Palomba University of Salerno
Pre-print
15:01
14m
Full-paper
An Exploratory Investigation into Code License Infringements in Large Language Model Training Datasets
Full Paper
Research Track
Jonathan Katzy Delft University of Technology, Răzvan Mihai Popescu Delft University of Technology, Arie van Deursen Delft University of Technology, Maliheh Izadi Delft University of Technology
15:15
15m
Other
Discussion
Research Track