Leveraging Large Language Model to Assist Detecting Rust Code Comment Inconsistency
Rust is renowned for its strong memory safety guarantees, yet its distinctive memory management model poses substantial challenges for both writing and understanding programs. Within Rust source code, comments are used to explicitly document the conditions that can trigger panic behavior, warning developers about the hazards associated with specific operations. Comments are therefore particularly crucial for documenting Rust program logic and design. Nevertheless, as modern software frequently undergoes updates and modifications, keeping these comments accurate and relevant is a labor-intensive endeavor.
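For illustration, a hypothetical example (not taken from the paper's dataset) of the kind of inconsistency in question: a doc comment's `# Panics` section promises a panic that the implementation, after a refactor, no longer performs.

```rust
/// Returns the element at `index`.
///
/// # Panics
///
/// Panics if `index` is out of bounds.
pub fn get_item(items: &[i32], index: usize) -> Option<i32> {
    // After a refactor, the out-of-bounds case returns `None` instead of panicking,
    // so the `# Panics` section above no longer matches the implementation.
    items.get(index).copied()
}
```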
In this paper, inspired by the remarkable capabilities of Large Language Models (LLMs) in understanding software programs, we propose RustC4, a code-comment inconsistency detection tool that combines program analysis with LLM-driven techniques. RustC4 leverages the LLMs' ability to interpret the natural language descriptions in code comments to extract the design constraints they state; program analysis techniques are then employed to verify whether the implementation satisfies these constraints. To evaluate the effectiveness of RustC4, we construct a dataset of 180 inconsistent pairs from 12 large-scale real-world Rust projects. The experimental results demonstrate that RustC4 detects 177 real inconsistent cases, 23 of which had been confirmed and fixed by developers by the time this paper was submitted.
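The abstract does not publish RustC4's internals, so the following is only a minimal sketch of how such a two-stage check could look. The names (`PanicConstraint`, `query_llm`, `body_can_panic`) and the string-matching "analysis" are assumptions standing in for a real LLM call and a real program analysis.

```rust
/// A constraint recovered from a doc comment, e.g. a documented panic condition.
struct PanicConstraint {
    condition: String, // e.g. "index is out of bounds"
}

/// Stage 1 (assumed): an LLM rewrites the free-form `# Panics` text into a structured
/// condition. `query_llm` is a placeholder for whatever model API is actually used.
fn extract_constraint(panics_doc: &str) -> PanicConstraint {
    PanicConstraint {
        condition: query_llm(&format!("State the panic condition in: {panics_doc}")),
    }
}

/// Stage 2 (assumed): check whether the function body still contains a panic site.
/// A real analysis would inspect MIR/HIR; simple string matching stands in here.
fn body_can_panic(body_source: &str) -> bool {
    ["panic!", ".unwrap()", ".expect(", "assert!"]
        .iter()
        .any(|p| body_source.contains(p))
}

/// Report an inconsistency when the comment promises a panic but the implementation
/// contains no panic site at all.
fn is_inconsistent(constraint: &PanicConstraint, body_source: &str) -> bool {
    !constraint.condition.is_empty() && !body_can_panic(body_source)
}

fn query_llm(_prompt: &str) -> String {
    // Placeholder: a real tool would call an LLM service here.
    "index is out of bounds".to_string()
}
```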
Tue 29 Oct (Displayed time zone: Pacific Time, US & Canada)
13:30 - 15:00 | LLM for SE 1 (Research Papers / NIER Track / Tool Demonstrations / Journal-first Papers) at Camellia. Chair(s): Chengcheng Wan, East China Normal University

13:30 | 15m Talk | How Effective Do Code Language Models Understand Poor-Readability Code? (Research Papers). Chao Hu, Shanghai Jiao Tong University; Yitian Chai, School of Software, Shanghai Jiao Tong University; Hao Zhou, Pattern Recognition Center, WeChat, Tencent; Fandong Meng, WeChat AI, Tencent; Jie Zhou, Tencent; Xiaodong Gu, Shanghai Jiao Tong University

13:45 | 15m Talk | An Empirical Study to Evaluate AIGC Detectors on Code Content (Research Papers). Jian Wang, Nanyang Technological University; Shangqing Liu, Nanyang Technological University; Xiaofei Xie, Singapore Management University; Yi Li, Nanyang Technological University. Pre-print

14:00 | 15m Talk | Distilled GPT for source code summarization (Journal-first Papers)

14:15 | 15m Talk | Leveraging Large Language Model to Assist Detecting Rust Code Comment Inconsistency (Research Papers). Zhang Yichi; Zixi Liu, Nanjing University; Yang Feng, Nanjing University; Baowen Xu, Nanjing University

14:30 | 10m Talk | LLM-Based Java Concurrent Program to ArkTS Converter (Tool Demonstrations). Runlin Liu, Beihang University; Yuhang Lin, Zhejiang University; Yunge Hu, Beihang University; Zhe Zhang, Beihang University; Xiang Gao, Beihang University

14:40 | 10m Talk | Towards Leveraging LLMs for Reducing Open Source Onboarding Information Overload (NIER Track)

14:50 | 10m Talk | CoDefeater: Using LLMs To Find Defeaters in Assurance Cases (NIER Track). Usman Gohar, Dept. of Computer Science, Iowa State University; Michael Hunter, Iowa State University; Robyn Lutz, Iowa State University; Myra Cohen, Iowa State University