ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States
Thu 31 Oct 2024 15:30 - 15:45 at Carr - Code smells

Detecting and refactoring code smells is challenging, laborious, and ongoing work. Although large language models (LLMs) have demonstrated potential in identifying various types of code smells, they also have limitations, such as input-output token restrictions, difficulty in accessing repository-level knowledge, and difficulty in performing dynamic source code analysis.

Existing learning-based methods and commercial expert toolsets have advantages in handling complex smells: they can analyze project structures and contextual information in depth, access global code repositories, and apply advanced code analysis techniques. However, these toolsets are often designed for specific types and patterns of code smells and can only address a fixed set of smells, lacking flexibility and scalability.

To address this problem, we propose iSMELL, an ensemble approach that combines multiple code smell detection toolsets via a Mixture of Experts (MoE) architecture for comprehensive code smell detection, and enhances LLMs with the detection results from the expert toolsets to refactor the identified code smells. First, we train an MoE model that, given an input code vector, outputs the most suitable expert tool for identifying each type of smell. Then, we run the recommended toolsets for code smell detection and collect their results. Finally, we equip the prompts with the detection results from the expert toolsets, thereby enhancing the refactoring capability of LLMs on smelly code and enabling them to provide different solutions based on the type of smell.

We evaluate our approach on detecting and refactoring three classical and complex code smells: Refused Bequest, God Class, and Feature Envy. The results show that, by adopting seven expert code smell toolsets, iSMELL achieves an average F1 score of 75.17% on code smell detection, outperforming LLM baselines by 35.05% in F1 score. We further evaluate the code refactored by the enhanced LLM. The quantitative and human evaluation results show that iSMELL improves code quality metrics and produces satisfactory refactorings of the identified code smells. We believe that our proposed solution can provide new insights into better leveraging LLMs and existing approaches to resolve complex software tasks.
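To make the pipeline in the abstract concrete, here is a minimal, hypothetical sketch of its two routing steps: a gating network scores an input code vector against each expert toolset and routes to the highest-scoring one, and the chosen toolset's findings are then folded into the LLM refactoring prompt. Every name below (route_to_expert, GATE_WEIGHTS, build_refactoring_prompt, the toy weights) is illustrative and not the paper's actual API; in iSMELL the gating weights would come from MoE training.

import math
from typing import Dict, List, Sequence

# Toy gating weights per expert toolset (assumed values for illustration only).
GATE_WEIGHTS: Dict[str, List[float]] = {
    "ExpertToolA": [0.9, 0.1, 0.0],
    "ExpertToolB": [0.2, 0.7, 0.1],
    "ExpertToolC": [0.1, 0.2, 0.8],
}

def softmax(scores: Sequence[float]) -> List[float]:
    """Numerically stable softmax over raw gating scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_to_expert(code_vector: Sequence[float]) -> str:
    """Return the expert toolset with the highest gating probability for this
    code vector (dot-product scores passed through a softmax)."""
    names = list(GATE_WEIGHTS)
    scores = [sum(w * x for w, x in zip(GATE_WEIGHTS[n], code_vector)) for n in names]
    probs = softmax(scores)
    return max(zip(names, probs), key=lambda pair: pair[1])[0]

def build_refactoring_prompt(source: str, smell: str, findings: List[str]) -> str:
    """Equip the LLM prompt with the expert toolset's detection results."""
    report = "\n".join(f"- {f}" for f in findings)
    return (f"The following code exhibits a '{smell}' smell.\n"
            f"Expert toolset findings:\n{report}\n\n"
            f"Refactor it to remove the smell:\n{source}")

# Usage with toy values: route a 3-dimensional code vector, then build a prompt.
expert = route_to_expert([1.0, 0.3, 0.2])  # -> "ExpertToolA" for these weights
findings = [f"{expert}: class has 42 methods and low cohesion (God Class)"]
print(build_refactoring_prompt("class Foo: ...", "God Class", findings))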

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:30
15:30
15m
Talk
iSMELL: Assembling LLMs with Expert Toolsets for Code Smell Detection and Refactoring
Research Papers
Di Wu; Fangwen Mu, Institute of Software, Chinese Academy of Sciences; Lin Shi, Beihang University; Zhaoqiang Guo, Software Engineering Application Technology Lab, Huawei, China; Kui Liu, Huawei; Weiguang Zhuang, Beihang University; Yuqi Zhong, Beihang University; Li Zhang, Beihang University
15:45
15m
Talk
A Position-Aware Approach to Decomposing God Classes
Research Papers
Tianyi Chen, Beijing Institute of Technology; Yanjie Jiang, Peking University; Fu Fan, Beijing Institute of Technology; Bo Liu, Beijing Institute of Technology; Hui Liu, Beijing Institute of Technology
16:00
15m
Talk
Three Heads Are Better Than One: Suggesting Move Method Refactoring Opportunities with Inter-class Code Entity Dependency Enhanced Hybrid Hypergraph Neural Network
Research Papers
Di Cui, Xidian University; Jiaqi Wang, Xidian University; Qiangqiang Wang, Xidian University; Peng Ji, Xidian University; Minglang Qiao, Xidian University; Yutong Zhao, University of Central Missouri; Jingzhao Hu, Xidian University; Luqiao Wang, Xidian University; Qingshan Li, Xidian University
16:15
10m
Talk
Copilot-in-the-Loop: Fixing Code Smells in Copilot-Generated Python Code using Copilot
NIER Track
Beiqi Zhang, Wuhan University; Peng Liang, Wuhan University, China; Qiong Feng, Nanjing University of Science and Technology; Yujia Fu, Wuhan University; Zengyang Li, Central China Normal University