AdaptEval: A Benchmark for Evaluating Large Language Models on Code Snippet Adaptation
Recent advancements in large language models (LLMs) have automated various software engineering tasks, and benchmarks have emerged to evaluate their capabilities. However, for adaptation, a critical activity during code reuse, no benchmark exists to assess LLMs’ performance, leaving their practical utility in this area unclear. To fill this gap, we propose AdaptEval, a benchmark designed to evaluate LLMs on code snippet adaptation. Unlike existing benchmarks, AdaptEval incorporates three distinctive features. First, practical context: tasks in AdaptEval are derived from developers’ practices, preserving rich contextual information from the Stack Overflow and GitHub communities. Second, multi-granularity annotation: each task is annotated with requirements at both the task and adaptation levels, supporting the evaluation of LLMs across diverse adaptation scenarios. Third, fine-grained evaluation: AdaptEval includes a two-tier testing framework combining adaptation-level and function-level tests, which enables evaluating LLMs’ performance on individual adaptations. Based on AdaptEval, we conduct the first empirical study to evaluate six instruction-tuned LLMs and three reasoning LLMs on code snippet adaptation. Experimental results demonstrate that AdaptEval enables the assessment of LLMs’ adaptation capabilities from multiple perspectives. It also provides critical insights into their current limitations, particularly their struggle to follow explicit instructions. We hope AdaptEval can facilitate further investigation and enhancement of LLMs’ capabilities in code snippet adaptation, supporting their application in real-world software reuse.
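To make the multi-granularity annotation and the two-tier testing framework concrete, the following Python sketch shows one way such a task record and its scoring could be organized. The class and field names (AdaptTask, Adaptation, task_requirement, function_tests) and the pass-rate computation are illustrative assumptions for this page, not the benchmark's actual schema or metrics.

```python
# Hypothetical sketch of an AdaptEval-style task record and two-tier scoring.
# All names and the scoring formula are illustrative assumptions, not the
# benchmark's actual data format or evaluation code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Adaptation:
    """One required adaptation with its own adaptation-level checks."""
    description: str                        # e.g. "rename variables to match the target context"
    tests: List[Callable[[str], bool]]      # checks run against the adapted snippet


@dataclass
class AdaptTask:
    """A reuse task: a community snippet to be adapted into a target context."""
    snippet: str                            # original snippet (e.g. from Stack Overflow)
    context: str                            # surrounding target code (e.g. from GitHub)
    task_requirement: str                   # task-level description of the overall goal
    adaptations: List[Adaptation] = field(default_factory=list)
    function_tests: List[Callable[[str], bool]] = field(default_factory=list)


def evaluate(task: AdaptTask, adapted_code: str) -> Dict[str, float]:
    """Two-tier scoring: per-adaptation pass rate plus function-level pass rate."""
    adaptations_passed = sum(
        all(check(adapted_code) for check in a.tests) for a in task.adaptations
    )
    functions_passed = sum(check(adapted_code) for check in task.function_tests)
    return {
        "adaptation_pass_rate": adaptations_passed / max(len(task.adaptations), 1),
        "function_pass_rate": functions_passed / max(len(task.function_tests), 1),
    }
```

Under these assumptions, a model's output for one task would be scored both on how many individual adaptations it completed and on whether the adapted snippet passes the function-level tests as a whole.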
This program is tentative and subject to change.
Wed 19 Nov (displayed time zone: Seoul)
11:00 - 12:30
11:00 | 10m Talk | Automated Inline Comment Smell Detection and Repair with Large Language Models | Research Papers | Hatice Kübra Çağlar (Bilkent University), Semih Çağlar (Bilkent University), Eray Tüzün (Bilkent University) | Pre-print
11:10 | 10m Talk | What’s DAT Smell? Untangling and Weaving the Disjoint Assertion Tangle Test Smell | Research Papers | Monil Narang (University of California, Irvine), Hang Du (University of California at Irvine), James Jones (University of California at Irvine) | Pre-print
11:20 | 10m Talk | Your Build Scripts Stink: The State of Code Smells in Build Scripts | Research Papers | Mahzabin Tamanna (North Carolina State University), Yash Chandrani (North Carolina State University), Matthew Burrows (North Carolina State University), Brandon Wroblewski (North Carolina State University), Dominik Wermke (North Carolina State University), Laurie Williams (North Carolina State University)
11:30 | 10m Talk | Do Experts Agree About Smelly Infrastructure? | Journal-First Track | Sogol Masoumzadeh (McGill University), Nuno Saavedra (INESC-ID and IST, University of Lisbon), Rungroj Maipradit (University of Waterloo), Lili Wei (McGill University), João F. Ferreira (INESC-ID and IST, University of Lisbon), Daniel Varro (Linköping University / McGill University), Shane McIntosh (University of Waterloo)
11:40 | 10m Talk | Wired for Reuse: Automating Context-Aware Code Adaptation in IDEs via LLM-Based Agent | Research Papers | Taiming Wang (Beijing Institute of Technology), Yanjie Jiang (Peking University), Chunhao Dong (Beijing Institute of Technology), Yuxia Zhang (Beijing Institute of Technology), Hui Liu (Beijing Institute of Technology)
11:50 | 10m Talk | BinStruct: Binary Structure Recovery Combining Static Analysis and Semantics | Research Papers | Yiran Zhang, Zhengzi Xu (Imperial Global Singapore), Zhe Lang (Institute of Information Engineering, CAS), Chengyue Liu, Yuqiang Sun (Nanyang Technological University), Wenbo Guo (School of Cyber Science and Engineering, Sichuan University), Chengwei Liu (Nanyang Technological University), Weisong Sun (Nanyang Technological University), Yang Liu (Nanyang Technological University)
12:00 | 10m Talk | SateLight: A Satellite Application Update Framework for Satellite Computing | Research Papers | Jinfeng Wen (Beijing University of Posts and Telecommunications), Jianshu Zhao (Beijing University of Posts and Telecommunications), Zixi Zhu (Beijing University of Posts and Telecommunications), Xiaomin Zhang (Beijing University of Posts and Telecommunications), Qi Liang (Beijing University of Posts and Telecommunications), Ao Zhou (Beijing University of Posts and Telecommunications), Shangguang Wang (Beijing University of Posts and Telecommunications)
12:10 | 10m Talk | ComCat: Expertise-Guided Context Generation to Enhance Code Comprehension | Journal-First Track | Skyler Grandel (Vanderbilt University), Scott Andersen (National Autonomous University of Mexico), Yu Huang (Vanderbilt University), Kevin Leach (Vanderbilt University)
12:20 | 10m Talk | AdaptEval: A Benchmark for Evaluating Large Language Models on Code Snippet Adaptation | Research Papers | Tanghaoran Zhang (National University of Defense Technology), Xinjun Mao (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Yuxin Zhao (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology), Yao Lu (National University of Defense Technology), Jin Zhang (Hunan Normal University), Zhang Zhang (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology), Kang Yang (National University of Defense Technology), Yue Yu (PengCheng Lab)