RealisticCodeBench: Towards More Realistic Evaluation of Large Language Models for Code Generation
This program is tentative and subject to change.
Evaluating the code generation capabilities of Large Language Models (LLMs) remains an open question. Existing benchmarks like HumanEval and MBPP focus primarily on algorithmic and basic programming tasks, which do not fully capture the intricacies of real-world coding challenges. Recently, more advanced benchmarks—such as CoderEval, EvoCodeBench, and ClassEval—have been introduced to address this gap, evaluating LLMs on practical coding tasks from GitHub repositories, such as non-standalone function generation and class-level code generation. However, even the most sophisticated LLMs struggle with these complex tasks; for instance, GPT-4 achieves only a 37.0% pass@1 on ClassEval. Prior studies show that developers often discard LLM-generated code or abandon code generation models when outputs are incorrect or require extensive debugging, which leads them to rely on LLMs primarily for simpler tasks that high-performing models can handle reliably.
To address this gap, we introduce RealisticCodeBench, a benchmark specifically designed to reflect the types of problems developers commonly tackle with LLMs. By mining high-star GitHub repositories for code samples tagged as generated by ChatGPT or Copilot, we collect real-world coding tasks that capture typical LLM usage scenarios. We modify these tasks, generate reference solutions and test cases, and adapt the problems into multiple programming languages. The result is RealisticCodeBench, comprising 417 programming problems adapted across multiple languages: 392 in Python, 376 in JavaScript, 372 in TypeScript, 339 in Java, and 353 in C++, each with corresponding reference solutions and test cases. We evaluate 12 general-purpose and code-specific LLMs on RealisticCodeBench. Our findings reveal that GPT-4.1 achieves the highest average pass@1 score across languages, closely followed by DeepSeek-V3-671B, suggesting that DeepSeek-V3-671B is a viable open-source alternative to GPT-4.1 for large companies that have privacy concerns and sufficient GPU resources. CodeGeeX4-9B, a cost-effective model, emerges as a suitable substitute for GPT-3.5 for individual developers and smaller organizations with similar privacy considerations. Additionally, LLM performance discrepancies between HumanEval and RealisticCodeBench suggest that some LLMs are either overly specialized for HumanEval-style problems or insufficiently optimized for real-world coding challenges. Finally, we analyze failed cases, summarize common LLM limitations, and discuss implications for researchers and practitioners.
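For context, pass@1 measures the fraction of problems a model solves when its generated samples are judged against the test cases. The page does not state which estimator the paper uses, so the sketch below assumes the standard unbiased pass@k estimator from Chen et al. (2021); the function name and the example numbers are illustrative only, not taken from the paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): 1 - C(n-c, k) / C(n, k).

    n: samples generated for a problem
    c: samples that pass all of that problem's test cases
    k: evaluation budget (k = 1 for the pass@1 scores cited above)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # Stable product form avoids computing huge binomial coefficients.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 10 samples for one problem, 3 of which pass its tests.
print(pass_at_k(n=10, c=3, k=1))  # ~0.3; with k = 1 this reduces to c / n
# A benchmark-level score averages pass_at_k over all problems.
```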
Tue 18 Nov (displayed time zone: Seoul)
11:00 - 12:30

11:00 (10m) Talk: Coverage-Based Harmfulness Testing for LLM Code Transformation (Research Papers)
Honghao Tan (Concordia University), Haibo Wang (Concordia University), Diany Pressato (Concordia University), Yisen Xu (Software PErformance, Analysis, and Reliability (SPEAR) lab, Concordia University, Montreal, Canada), Shin Hwei Tan (Concordia University)

11:10 (10m) Talk: RealisticCodeBench: Towards More Realistic Evaluation of Large Language Models for Code Generation (Research Papers)
Xiao Yu (Zhejiang University), Haoxuan Chen (Wuhan University of Technology), Lei Liu (Xi’an Jiaotong University), Xing Hu (Zhejiang University), Jacky Keung (City University of Hong Kong), Xin Xia (Zhejiang University)

11:20 (10m) Talk: Code-DiTing: Automatic Evaluation of Code Generation without References or Test Cases (Research Papers; pre-print available)
Guang Yang, Yu Zhou (Nanjing University of Aeronautics and Astronautics), Xiang Chen (Nantong University), Wei Zheng (Northwestern Polytechnical University), Xing Hu (Zhejiang University), Xin Zhou (Singapore Management University, Singapore), David Lo (Singapore Management University), Taolue Chen (Birkbeck, University of London)

11:30 (10m) Talk: An Agent-based Evaluation Framework for Complex Code Generation (Research Papers)
Xinchen Wang (Harbin Institute of Technology), Pengfei Gao (ByteDance), Chao Peng (ByteDance), Ruida Hu (Harbin Institute of Technology, Shenzhen), Cuiyun Gao (Harbin Institute of Technology, Shenzhen)
11:40 (10m) Talk: PseudoFix: Refactoring Distorted Structures in Decompiled C Pseudocode (Research Papers)
Gangyang Li (University of Science and Technology of China), Xiuwei Shang (University of Science and Technology of China), Shaoyin Cheng (University of Science and Technology of China), Junqi Zhang (University of Science and Technology of China), Li Hu, Xu Zhu (University of Science and Technology of China), Weiming Zhang (University of Science and Technology of China), Nenghai Yu (School of Cyber Security, University of Science and Technology of China)
11:50 (10m) Talk: Evaluating and Improving Framework-based Parallel Code Completion with Large Language Models (Research Papers)
Ke Liu, Qinglin Wang (Shandong Normal University), Xiang Chen (Nantong University), Guang Yang, YiGui Feng (National University of Defense Technology), Gencheng Liu (National University of Defense Technology), Jie Liu (Institute of Software, Chinese Academy of Sciences)

12:00 (10m) Talk: Variational Prefix Tuning for diverse and accurate code summarization using pre-trained language models (Journal-First Track)
Junda Zhao (Department of Mechanical and Industrial Engineering, University of Toronto), Yuliang Song (Department of Mechanical and Industrial Engineering, University of Toronto), Eldan Cohen (Department of Mechanical and Industrial Engineering, University of Toronto)

12:10 (10m) Talk: Effective Code Membership Inference for Code Completion Models via Adversarial Prompts (Research Papers)
Yuan Jiang (Harbin Institute of Technology), Zehao Li (Harbin Institute of Technology), Shan Huang (East China Normal University), Christoph Treude (Singapore Management University), Xiaohong Su (Harbin Institute of Technology), Tiantian Wang (Harbin Institute of Technology)

12:20 (10m) Talk: LongCodeZip: Compress Long Context for Code Language Models (Research Papers; pre-print available; media attached)
Yuling Shi (Shanghai Jiao Tong University), Yichun Qian (Stanford University), Hongyu Zhang (Chongqing University), Beijun Shen (Shanghai Jiao Tong University), Xiaodong Gu (Shanghai Jiao Tong University)