A Study on Applying Large Language Models to Issue Classification
This program is tentative and subject to change.
Prompt-based large language models (LLMs) have demonstrated the ability to perform tasks with little or no additional training data. In the context of issue classification, researchers have actively explored the capabilities of LLMs in classifying issue reports, but existing studies still face limitations in accuracy. This study replicates an LLM-based issue classification study that used GPT-3.5 Turbo and explores variants, such as adopting different models (Llama 3.1 8B and GPT-4o). Experimental results show that the classifier fine-tuned with GPT-3.5 Turbo reproduces the accuracy reported in the original research, and that the classifier fine-tuned with Llama 3.1 8B yields an average F1-score (0.8004) that is 0.0535 lower than that of the GPT-3.5 Turbo classifier (0.8539). On the other hand, the classifier fine-tuned with GPT-4o yields an average F1-score (0.8639) that is 0.01 higher than that of the GPT-3.5 Turbo classifier. Additionally, the project-agnostic classifier fine-tuned with GPT-4o yields the highest F1-score of 0.8680. These findings contribute to advancing LLM-based issue classification by providing experimental insights into the accuracy of LLMs on this task.
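As a toy illustration (not taken from the paper) of the macro-averaged F1-score used to compare the classifiers above, the sketch below computes per-class and macro F1 from hypothetical gold labels and predictions over the common issue categories bug/feature/question; the label set and data are assumptions for illustration only:

```python
# Sketch: computing per-class and macro-averaged F1 for issue classification.
# The labels and predictions below are toy data, not results from the study.

def f1_per_class(y_true, y_pred, label):
    """F1-score for one class, treating `label` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical gold labels and model predictions for six issue reports.
y_true = ["bug", "feature", "bug", "question", "bug", "feature"]
y_pred = ["bug", "feature", "feature", "question", "bug", "bug"]

labels = ["bug", "feature", "question"]
macro_f1 = sum(f1_per_class(y_true, y_pred, lbl) for lbl in labels) / len(labels)
```

An average F1 in this style is what the abstract's figures (e.g. 0.8539 for GPT-3.5 Turbo vs. 0.8639 for GPT-4o) summarize across issue categories.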
Sun 27 Apr | Displayed time zone: Eastern Time (US & Canada)
11:00 - 12:30 | Vulnerabilities, Technical Debt, Defects | Early Research Achievements (ERA) / Research Track / Replications and Negative Results (RENE) | at 205
11:00 10m Talk | CalmDroid: Core-Set Based Active Learning for Multi-Label Android Malware Detection | Research Track | Minhong Dong (Tiangong University), Liyuan Liu (Tiangong University), Mengting Zhang (Tiangong University), Sen Chen (Tianjin University), Wenying He (Hebei University of Technology), Ze Wang (Tiangong University), Yude Bai (Tianjin University)
11:10 10m Talk | Towards Task-Harmonious Vulnerability Assessment based on LLM | Research Track | Zaixing Zhang (Southeast University), Chang Jianming, Tianyuan Hu (Southeast University), Lulu Wang (Southeast University), Bixin Li (Southeast University)
11:20 10m Talk | Slicing-Based Approach for Detecting and Patching Vulnerable Code Clones | Research Track | Hakam W. Alomari (Miami University), Christopher Vendome (Miami University), Himal Gyawali (Miami University)
11:30 6m Talk | Revisiting Security Practices for GitHub Actions Workflows | Early Research Achievements (ERA)
11:36 6m Talk | Leveraging multi-task learning to improve the detection of SATD and vulnerability | Replications and Negative Results (RENE) | Barbara Russo (Free University of Bolzano), Jorge Melegati (Free University of Bozen-Bolzano), Moritz Mock (Free University of Bozen-Bolzano) | Pre-print
11:42 10m Talk | Leveraging Context Information for Self-Admitted Technical Debt Detection | Research Track | Miki Yonekura (Nara Institute of Science and Technology), Yutaro Kashiwa (Nara Institute of Science and Technology), Bin Lin (Radboud University), Kenji Fujiwara (Nara Women’s University), Hajimu Iida (Nara Institute of Science and Technology)
11:52 6m Talk | Personalized Code Readability Assessment: Are We There Yet? | Replications and Negative Results (RENE) | Antonio Vitale (Politecnico di Torino, University of Molise), Emanuela Guglielmi (University of Molise), Rocco Oliveto (University of Molise), Simone Scalabrino (University of Molise)
11:58 6m Talk | Automated Refactoring of Non-Idiomatic Python Code: A Differentiated Replication with LLMs | Replications and Negative Results (RENE) | Pre-print
12:04 10m Research paper | Sonar: Detecting Logic Bugs in DBMS through Generating Semantic-aware Non-Optimizing Query | Research Track | Shiyang Ye (Zhejiang University), Chao Ni (Zhejiang University), Jue Wang (Nanjing University), Qianqian Pang (Zhejiang University), Xinrui Li (School of Software Technology, Zhejiang University), Xiaodan Xu (College of Computer Science and Technology, Zhejiang University)
12:14 6m Talk | A Study on Applying Large Language Models to Issue Classification | Replications and Negative Results (RENE)
12:20 10m Live Q&A | Session's Discussion: "Vulnerabilities, Technical Debt, Defects" | Research Track