ICSE 2026
Sun 12 - Sat 18 April 2026, Rio de Janeiro, Brazil

Large language models (LLMs) have revolutionized automated code generation, yet evaluation of their real-world effectiveness remains limited by static benchmarks and simplistic metrics. We present ProxyWar, a novel framework that systematically assesses code generation quality by embedding LLM-generated agents within diverse, competitive game environments. Unlike existing approaches, ProxyWar evaluates both functional correctness and strategic performance, combining automated testing, iterative code repair, and multi-agent tournaments to provide a holistic view of code quality. Applied to a range of state-of-the-art code generation models and games, our approach uncovers notable discrepancies between static benchmark scores and actual performance in dynamic, competitive settings, revealing overlooked limitations and opportunities for improvement. These findings highlight the need for richer, competition-based evaluation of code generation. Looking forward, ProxyWar lays a foundation for research into LLM-driven algorithm discovery and adaptive problem solving, including the potential for models to outperform hand-crafted agents. All code and evaluation environments will be released to foster further research and reproducibility.
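To make the described pipeline concrete (generate an agent, test it, iteratively repair failures, then rank agents via a tournament), the sketch below outlines one possible evaluation loop. All names here (Agent, generate_agent, run_unit_tests, repair_agent, play_match, tournament) are illustrative placeholders under our own assumptions, not ProxyWar's actual API, and the test and match logic is simulated.

```python
# Hypothetical sketch of a test -> repair -> tournament evaluation loop.
# Every name below is an illustrative placeholder, not ProxyWar's actual API.

import itertools
import random
from dataclasses import dataclass


@dataclass
class Agent:
    model_name: str
    source_code: str
    wins: int = 0


def generate_agent(model_name: str, game_spec: str) -> Agent:
    """Placeholder for prompting an LLM to write an agent for `game_spec`."""
    return Agent(model_name=model_name, source_code=f"# agent for {game_spec}")


def run_unit_tests(agent: Agent) -> bool:
    """Placeholder: would execute the game's interface tests against the code."""
    return random.random() > 0.3  # simulate occasional test failures


def repair_agent(agent: Agent, max_rounds: int = 3) -> bool:
    """Iteratively ask the model to fix failing code (simulated here)."""
    for _ in range(max_rounds):
        if run_unit_tests(agent):
            return True
        agent.source_code += "\n# repaired"
    return False


def play_match(a: Agent, b: Agent) -> Agent:
    """Placeholder: would run both agents inside the game engine and return the winner."""
    return random.choice([a, b])


def tournament(agents: list[Agent]) -> list[Agent]:
    """Round-robin tournament; returns agents ranked by number of wins."""
    for a, b in itertools.combinations(agents, 2):
        play_match(a, b).wins += 1
    return sorted(agents, key=lambda ag: ag.wins, reverse=True)


if __name__ == "__main__":
    game = "tic-tac-toe"
    candidates = [generate_agent(m, game) for m in ["model-a", "model-b", "model-c"]]
    playable = [a for a in candidates if repair_agent(a)]  # drop agents that never pass tests
    for rank, agent in enumerate(tournament(playable), start=1):
        print(rank, agent.model_name, agent.wins)
```

The point of the sketch is the separation of concerns the abstract implies: functional correctness is gated by tests and a bounded repair loop, while strategic quality is measured only among agents that survive that gate.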