AdaptEval: A Benchmark for Evaluating Large Language Models on Code Snippet Adaptation
Recent advancements in large language models (LLMs) have automated various software engineering tasks, and benchmarks have emerged to evaluate their capabilities. However, for adaptation, a critical activity in code reuse, no benchmark exists to assess LLMs’ performance, leaving their practical utility in this area unclear. To fill this gap, we propose AdaptEval, a benchmark designed to evaluate LLMs on code snippet adaptation. Unlike existing benchmarks, AdaptEval incorporates three distinctive features. First, \textbf{\textit{practical context}}: tasks in AdaptEval are derived from developers’ practices, preserving rich contextual information from the Stack Overflow and GitHub communities. Second, \textbf{\textit{multi-granularity annotation}}: each task is annotated with requirements at both the task and adaptation levels, supporting the evaluation of LLMs across diverse adaptation scenarios. Third, \textbf{\textit{fine-grained evaluation}}: AdaptEval provides a two-tier testing framework combining adaptation-level and function-level tests, enabling the evaluation of LLMs’ performance on individual adaptations. Based on AdaptEval, we conduct the first empirical study evaluating six instruction-tuned LLMs as well as three reasoning LLMs on code snippet adaptation. Experimental results demonstrate that AdaptEval enables the assessment of LLMs’ adaptation capabilities from multiple perspectives. It also provides critical insights into their current limitations, particularly their struggle to follow explicit instructions. We hope AdaptEval facilitates further investigation and enhancement of LLMs’ capabilities in code snippet adaptation, supporting their application in real-world software reuse.