How Effective Do Code Language Models Understand Poor-Readability Code?
Code language models such as CodeT5 and CodeLlama have achieved substantial success in code comprehension. While most research effort has focused on improving model architectures and training processes, we find that the benchmarks currently used to evaluate code comprehension are confined to high-readability code, despite the prevalence of low-readability code in practice. As such, they are inadequate for revealing the fine-grained abilities of models, particularly their robustness to varying degrees of readability. In this paper, we comprehensively analyze the robustness of code summarization models to code with varying readability, using seven obfuscated datasets derived from existing benchmarks. Our findings indicate that current code comprehension models are sensitive to variations in code readability. In particular, their performance depends predominantly on semantic cues within the code, often neglecting syntactic aspects. Existing benchmarks are biased toward evaluating semantic features, thereby overlooking the models' ability to understand syntactic features. Based on these findings, we present R-CodeSumEval, a new evaluation benchmark for code summarization. R-CodeSumEval introduces readability into the testing process, covering semantic obfuscation, syntactic obfuscation, and their cross-obfuscation, thereby providing a more comprehensive and rigorous evaluation of code summarization models. Our study also offers suggestions for future research, such as constructing new benchmarks that evaluate model robustness on poor-readability code, proposing readability-aware metrics, and developing automatic methods for code data cleaning and normalization.
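As a concrete illustration of the kind of readability-reducing transformation described above, the sketch below applies semantic obfuscation (identifier renaming) to a Python snippet: behavior is preserved, but the naming cues that models tend to rely on are stripped away. This is a minimal sketch using Python's standard `ast` module, not the paper's tooling; the names `RenameIdentifiers` and `semantic_obfuscate`, and the exact renaming scheme, are illustrative assumptions.

```python
# Minimal sketch of semantic obfuscation via identifier renaming.
# Not the paper's implementation; names and scheme are assumptions.
import ast
import builtins

BUILTINS = set(dir(builtins))  # keep sum, len, ... so the code still runs


class RenameIdentifiers(ast.NodeTransformer):
    """Replace meaningful identifiers with opaque ones (v0, v1, ...)."""

    def __init__(self):
        self.mapping = {}

    def _fresh(self, name):
        # Reuse an existing mapping or mint a new opaque name.
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_FunctionDef(self, node):
        node.name = self._fresh(node.name)
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._fresh(node.arg)
        return node

    def visit_Name(self, node):
        # Rename user identifiers; leave builtins untouched.
        if node.id in self.mapping or node.id not in BUILTINS:
            node.id = self._fresh(node.id)
        return node


def semantic_obfuscate(source: str) -> str:
    """Return a behavior-preserving but less readable version of `source`."""
    tree = RenameIdentifiers().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))


if __name__ == "__main__":
    code = (
        "def average(numbers):\n"
        "    total = sum(numbers)\n"
        "    return total / len(numbers)\n"
    )
    print(semantic_obfuscate(code))
    # prints:
    # def v0(v1):
    #     v2 = sum(v1)
    #     return v2 / len(v1)
```

A syntactic counterpart would instead restructure the code (for example, rewriting a for loop as an equivalent while loop) while leaving identifiers intact; applying both transformations yields the cross-obfuscation setting mentioned above.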
Tue 29 Oct | Displayed time zone: Pacific Time (US & Canada)
13:30 - 15:00 | LLM for SE 1 | Research Papers / NIER Track / Tool Demonstrations / Journal-first Papers | Room: Camellia | Chair(s): Chengcheng Wan (East China Normal University)
13:30 (15m) Talk | How Effective Do Code Language Models Understand Poor-Readability Code? | Research Papers | Chao Hu (Shanghai Jiao Tong University), Yitian Chai (School of Software, Shanghai Jiao Tong University), Hao Zhou (Pattern Recognition Center, WeChat, Tencent), Fandong Meng (WeChat AI, Tencent), Jie Zhou (Tencent), Xiaodong Gu (Shanghai Jiao Tong University)
13:45 (15m) Talk | An Empirical Study to Evaluate AIGC Detectors on Code Content | Research Papers | Jian Wang (Nanyang Technological University), Shangqing Liu (Nanyang Technological University), Xiaofei Xie (Singapore Management University), Yi Li (Nanyang Technological University) | Pre-print
14:00 (15m) Talk | Distilled GPT for source code summarization | Journal-first Papers
14:15 (15m) Talk | Leveraging Large Language Model to Assist Detecting Rust Code Comment Inconsistency | Research Papers | Zhang Yichi, Zixi Liu (Nanjing University), Yang Feng (Nanjing University), Baowen Xu (Nanjing University)
14:30 (10m) Talk | LLM-Based Java Concurrent Program to ArkTS Converter | Tool Demonstrations | Runlin Liu (Beihang University), Yuhang Lin (Zhejiang University), Yunge Hu (Beihang University), Zhe Zhang (Beihang University), Xiang Gao (Beihang University)
14:40 (10m) Talk | Towards Leveraging LLMs for Reducing Open Source Onboarding Information Overload | NIER Track
14:50 (10m) Talk | CoDefeater: Using LLMs To Find Defeaters in Assurance Cases | NIER Track | Usman Gohar (Dept. of Computer Science, Iowa State University), Michael Hunter (Iowa State University), Robyn Lutz (Iowa State University), Myra Cohen (Iowa State University)