ICSE 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
Thu 1 May 2025 14:45 - 15:00 at 213 - AI for Program Comprehension 2 Chair(s): Oscar Chaparro

Large language models for code (i.e., code LLMs) have shown strong code understanding and generation capabilities. To evaluate these capabilities in various aspects, many benchmarks have been proposed (e.g., HumanEval and ClassEval). Code reasoning is one of the most essential abilities of code LLMs, but existing benchmarks for code reasoning are insufficient: they typically focus on predicting a program's input or output, and they evaluate neither the intermediate behavior during program execution nor the logical consistency of the reasoning (e.g., a model should not produce the correct output if its prediction of the execution path is wrong).
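For illustration only (these are not the paper's benchmark items), the kinds of intermediate-behavior questions such an evaluation might ask over a toy Python function could look like the following; the function and the questions are hypothetical:

def sum_even(nums):
    total = 0
    for x in nums:
        if x % 2 == 0:      # this branch decides the execution path
            total += x
    return total

# For the input [3, 4, 7, 10], questions beyond "what is the output?" could be:
#   coverage:       is `total += x` executed when x == 3?        (no)
#   program state:  what is `total` after the second iteration?  (4)
#   execution path: which branch is taken in the last iteration? (the even branch)
#   output:         what does the call return?                   (14)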
To address these problems, this paper proposes REval, a framework for evaluating the code reasoning abilities and consistency of code LLMs against program execution. We adapt existing code benchmarks into new benchmarks within our framework. A large-scale empirical study shows that most LLMs perform unsatisfactorily on both Runtime Behavior Reasoning (an average accuracy of 44.4%) and Incremental Consistency Evaluation (an average IC score of 10.3). These results reflect the urgent need for the community to strengthen the code reasoning capability of code LLMs.
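As a rough sketch of what incremental consistency could mean in practice (an assumed scoring rule, not necessarily the paper's exact IC definition): a model's answers over the ordered reasoning steps should not recover from wrong to right, since each later step builds on the earlier ones.

# Hypothetical helper: `step_correct[i]` records whether the model answered
# reasoning step i correctly, ordered from earlier steps (e.g., coverage)
# to later ones (e.g., output prediction).
def is_incrementally_consistent(step_correct):
    seen_wrong = False
    for ok in step_correct:
        if not ok:
            seen_wrong = True
        elif seen_wrong:
            # A correct answer after an earlier mistake, e.g., the right
            # output despite a wrong execution path, counts as inconsistent.
            return False
    return True

assert is_incrementally_consistent([True, True, False, False])
assert not is_incrementally_consistent([True, False, True, True])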

Thu 1 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
AI for Program Comprehension 2 (Research Track) at 213
Chair(s): Oscar Chaparro (William & Mary)

14:00 | 15m | Talk | Research Track
Code Comment Inconsistency Detection and Rectification Using a Large Language Model
Guoping Rong (Nanjing University), Yongda Yu (Nanjing University), Song Liu (Nanjing University), Xin Tan (Nanjing University), Tianyi Zhang (Nanjing University), Haifeng Shen (Southern Cross University), Jidong Hu (Zhongxing Telecom Equipment)

14:15 | 15m | Talk | Research Track
Context Conquers Parameters: Outperforming Proprietary LLM in Commit Message Generation
Aaron Imani (University of California, Irvine), Iftekhar Ahmed (University of California at Irvine), Mohammad Moshirpour (University of California, Irvine)

14:30 | 15m | Talk | Research Track
HedgeCode: A Multi-Task Hedging Contrastive Learning Framework for Code Search
Gong Chen (Wuhan University), Xiaoyuan Xie (Wuhan University), Xunzhu Tang (University of Luxembourg), Qi Xin (Wuhan University), Wenjie Liu (Wuhan University)

14:45 | 15m | Talk | Research Track
Reasoning Runtime Behavior of a Program with LLM: How Far Are We?
Junkai Chen (Zhejiang University), Zhiyuan Pan (Zhejiang University), Xing Hu (Zhejiang University), Zhenhao Li (York University), Ge Li (Peking University), Xin Xia (Huawei)

15:00 | 15m | Talk | Research Track
Source Code Summarization in the Era of Large Language Models
Weisong Sun (Nanjing University), Yun Miao (Nanjing University), Yuekang Li (UNSW), Hongyu Zhang (Chongqing University), Chunrong Fang (Nanjing University), Yi Liu (Nanyang Technological University), Gelei Deng (Nanyang Technological University), Yang Liu (Nanyang Technological University), Zhenyu Chen (Nanjing University)

15:15 | 15m | Talk | Research Track
Template-Guided Program Repair in the Era of Large Language Models
Kai Huang, Jian Zhang (Nanyang Technological University), Xiangxin Meng (Beihang University, Beijing, China), Yang Liu (Nanyang Technological University)