An Empirical Study of Transformer Models on Automatically Templating GitHub Issue Reports
GitHub introduced Issue Report Templates to streamline issue management and provide developers with essential information. However, many projects still face a high volume of non-template issue reports, which are often hard to understand and which users find difficult to complete properly using the templates. This paper explores better solutions through an empirical analysis of Transformer models for automatically templating issue reports. Through extensive experiments, we examined how different architectures and hyperparameters affect the performance of Transformer models on the automatic templating task. Additionally, we evaluated the performance of GPT-3.5 and Claude-3 on this task using various prompts. Our results indicate that Transformer-based models outperform the baseline models; the BERT model achieved the best performance, with an accuracy of 0.863 and an F1 score of 0.857. We also report how hyperparameter variations influence the performance of the different models. Moreover, generative large language models such as GPT-3.5 and Claude-3 may not be suitable for direct application to this automatic templating problem. Our findings highlight the efficacy of Transformer models for automatic templating and encourage researchers to investigate more advanced approaches for understanding and analyzing issue reports.
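For context, automatic templating of the kind the abstract describes can be framed as text classification: each piece of a free-form issue report is routed to a template field, and a fine-tuned encoder such as BERT predicts the field. The sketch below illustrates that framing only; the label set, checkpoint, and inference details are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: routing issue-report sentences to template fields with BERT.
# The LABELS list and "bert-base-uncased" checkpoint are assumptions for
# illustration; the study's exact task formulation and setup may differ.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Hypothetical template fields a sentence could be mapped to.
LABELS = ["bug_description", "reproduction_steps", "expected_behavior", "environment"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# The classification head is freshly initialized here; in practice it would be
# fine-tuned on labeled issue reports before use.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

def classify(sentence: str) -> str:
    """Predict which template field an issue-report sentence belongs to."""
    inputs = tokenizer(sentence, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("The app crashes when I click the export button."))
```

Under this framing, the reported accuracy and F1 would correspond to how often such a classifier assigns report content to the correct template field on a held-out set.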
Fri 7 Mar (Displayed time zone: Eastern Time (US & Canada))
11:00 - 12:30 | Mining Software Repositories (Research Papers / Early Research Achievement (ERA) Track / Journal First Track / Reproducibility Studies and Negative Results (RENE) Track) at L-1720. Chair(s): Brittany Reid (Nara Institute of Science and Technology)
11:00 (15m) Talk | An Empirical Study of Transformer Models on Automatically Templating GitHub Issue Reports. Research Papers. Jin Zhang (Hunan Normal University), Maoqi Peng (Hunan Normal University), Yang Zhang (National University of Defense Technology, China)
11:15 (15m) Talk | How to Select Pre-Trained Code Models for Reuse? A Learning Perspective. Research Papers. Zhangqian Bi (Huazhong University of Science and Technology), Yao Wan (Huazhong University of Science and Technology), Zhaoyang Chu (Huazhong University of Science and Technology), Yufei Hu (Huazhong University of Science and Technology), Junyi Zhang (Huazhong University of Science and Technology), Hongyu Zhang (Chongqing University), Guandong Xu (University of Technology), Hai Jin (Huazhong University of Science and Technology). Pre-print
11:30 (7m) Talk | Uncovering the Challenges: A Study of Corner Cases in Bug-Inducing Commits. Early Research Achievement (ERA) Track
11:37 (15m) Talk | A Bot Identification Model and Tool Based on GitHub Activity Sequences. Journal First Track. Natarajan Chidambaram (University of Mons), Alexandre Decan (University of Mons; F.R.S.-FNRS), Tom Mens (University of Mons)
11:52 (15m) Talk | Does the Tool Matter? Exploring Some Causes of Threats to Validity in Mining Software Repositories. Reproducibility Studies and Negative Results (RENE) Track. Nicole Hoess (Technical University of Applied Sciences Regensburg), Carlos Paradis (No Affiliation), Rick Kazman (University of Hawai‘i at Mānoa), Wolfgang Mauerer (Technical University of Applied Sciences Regensburg)