SANER 2025
Tue 4 - Fri 7 March 2025 Montréal, Québec, Canada

GitHub introduced Issue Report Templates to streamline issue management and ensure that reports contain the information developers need. However, many projects still receive a high volume of non-templated issue reports, which are often difficult to understand, and users frequently find the templates hard to complete properly. This paper explores better solutions through an empirical analysis of Transformer models for automatically templating issue reports. Through extensive experiments, we examined how different architectures and hyperparameters affect the performance of Transformer models on the automatic templating task. We also evaluated GPT-3.5 and Claude-3 on this task using a variety of prompts. Our results show that Transformer-based models outperform the baseline models; in particular, BERT achieved the best performance, with an accuracy of 0.863 and an F1 score of 0.857. We report how each model's performance varies with its hyperparameters, and we find that generative large language models such as GPT-3.5 and Claude-3 may not be suitable for direct application to the automatic templating problem. Our findings demonstrate the efficacy of Transformer models for automatic templating and encourage researchers to investigate more advanced approaches for understanding and analyzing issue reports.
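For context, the accuracy and F1 scores reported above are standard multi-class classification metrics. A minimal sketch of how they might be computed, using hypothetical gold labels and predictions (not the paper's data) and macro-averaged F1 as an assumed averaging scheme:

```python
# Hypothetical gold labels and model predictions for an issue-report
# classification task (illustrative only; not the paper's dataset).
gold = ["bug", "feature", "bug", "question", "bug", "feature"]
pred = ["bug", "feature", "bug", "bug", "question", "feature"]

def accuracy(gold, pred):
    # Fraction of predictions that exactly match the gold label.
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1_per_class(gold, pred, label):
    # Per-class F1 from true positives, false positives, false negatives.
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label and g != label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(gold, pred):
    # Unweighted mean of per-class F1 scores.
    labels = sorted(set(gold))
    return sum(f1_per_class(gold, pred, lab) for lab in labels) / len(labels)
```

With the toy labels above, accuracy is 4/6 and macro F1 is 5/9; the paper's reported 0.863 and 0.857 would come from the same kind of computation on its evaluation set.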