YABLoCo: Yet Another Benchmark for Long Context Code Generation (Virtual Talk)
Large Language Models (LLMs) demonstrate the ability to solve various programming tasks, including code generation. Typically, the performance of LLMs is measured on benchmarks with small or medium-sized context windows of thousands of lines of code (LoC). At the same time, in real-world software projects, repositories can span up to millions of LoC. Our work closes this gap by contributing YABLoCo, a long context code generation benchmark. The benchmark features a test set of 215 functions selected from four large repositories containing thousands of functions each. The dataset provides function metadata, contexts of the functions at different levels of dependencies, docstrings, function bodies, and a call graph for each repository. In this paper, we present three key aspects of the contribution. First, the benchmark targets function body generation in large repositories in C and C++, two languages not covered by previous benchmarks. Second, the benchmark includes large repositories ranging from 200K to 2,000K LoC.
Third, we contribute a scalable evaluation pipeline for efficient computation of the target metrics and a tool for visual analysis of generated code. Overall, these three aspects make it possible to evaluate code generation in large C/C++ repositories. The dataset and the code for the evaluation pipeline are available at https://github.com/yabloco-codegen/yabloco-benchmark.
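To make the dataset description concrete, the sketch below illustrates what a single benchmark entry could look like and how a generated function body might be scored against it. The field names (signature, docstring, reference_body, context, callees), the repository name, and the toy exact-match metric are assumptions for illustration only, not the actual YABLoCo schema or evaluation pipeline.

```python
# Hypothetical sketch of a YABLoCo-style benchmark entry and a minimal
# evaluation step. Field names, values, and the metric are illustrative
# assumptions, not the actual dataset schema or target metrics.
from dataclasses import dataclass, field


@dataclass
class BenchmarkEntry:
    repo: str                    # source repository (one of several large C/C++ projects)
    signature: str               # function signature the model must complete
    docstring: str               # natural-language description of the function
    reference_body: str          # ground-truth function body from the repository
    context: str                 # surrounding code at a chosen dependency level
    callees: list[str] = field(default_factory=list)  # call-graph neighbours


def exact_match(generated_body: str, entry: BenchmarkEntry) -> bool:
    """Toy metric: whitespace-insensitive exact match against the reference body."""
    def normalize(s: str) -> str:
        return " ".join(s.split())
    return normalize(generated_body) == normalize(entry.reference_body)


if __name__ == "__main__":
    entry = BenchmarkEntry(
        repo="example-repo",  # placeholder, not one of the four YABLoCo repositories
        signature="int clamp(int value, int lo, int hi)",
        docstring="Clamp value to the inclusive range [lo, hi].",
        reference_body="{ if (value < lo) return lo; if (value > hi) return hi; return value; }",
        context="/* nearby declarations and callee definitions would appear here */",
    )
    candidate = "{ if (value < lo) return lo;  if (value > hi) return hi;  return value; }"
    print("exact match:", exact_match(candidate, entry))
```

In practice, a benchmark of this kind would combine such per-entry records with repository-level context retrieval and more robust metrics than literal matching; this sketch only shows the shape of the data the abstract describes.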