ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil
Thu 16 Apr 2026 16:45 - 17:00 at Europa II - AI for Software Engineering 18 Chair(s): Moritz Beller

Large Language Models (LLMs) have attracted remarkable interest in industry and academia. The growing academic interest is reflected in the number of publications on this topic in recent years: 78 of the roughly 425 publications at ICSE 2024 alone performed experiments with LLMs. Conducting empirical studies with LLMs remains challenging and raises questions about how to achieve reproducible results, for both other researchers and practitioners. One important step towards excelling in empirical research on LLMs and their applications is to first understand to what extent current research results are actually reproducible and what factors may impede reproducibility. This investigation is the scope of our work. We contribute an analysis of the reproducibility of LLM-centric studies, provide insights into the factors impeding reproducibility, and discuss suggestions on how to improve the current state. In particular, we studied 86 articles describing LLM-centric studies published at ICSE 2024 and ASE 2024. Of the 86 articles, 18 provided research artefacts and used OpenAI models. We attempted to replicate those 18 studies, of which only five were fit for reproduction. We were unable to fully reproduce the results of any of the five studies: two appeared partially reproducible, and three did not appear reproducible at all. Our results highlight the need not only for stricter research artefact evaluations but also for more robust study designs to ensure the reproducibility of future publications.

Thu 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

16:00 - 17:30
AI for Software Engineering 18
Research Track at Europa II
Chair(s): Moritz Beller Meta Platforms, Inc., USA
16:00
15m
Talk
Are “Solved Issues” in SWE-bench Really Solved Correctly? An Empirical Study
Research Track
You Wang Zhejiang University, Michael Pradel CISPA Helmholtz Center for Information Security, Zhongxin Liu Zhejiang University
16:15
15m
Talk
EmbedAgent: Benchmarking Large Language Models in Embedded System Development
Virtual Attendance
Research Track
Ruiyang Xu University of Chinese Academy of Sciences, Jialun Cao Hong Kong University of Science and Technology, Mingyuan Wu Southern University of Science and Technology, Wenliang Zhong Institute of Software, Chinese Academy of Sciences, Yaojie Lu Institute of Software, Chinese Academy of Sciences, Ben He University of Chinese Academy of Sciences, Xianpei Han Institute of Software, Chinese Academy of Sciences, Shing-Chi Cheung Hong Kong University of Science and Technology, Le Sun Institute of Software, Chinese Academy of Sciences
Media Attached
16:30
15m
Talk
When Prompts Go Wrong: Evaluating Code Model Robustness to Ambiguous, Contradictory, and Incomplete Task Descriptions
Research Track
Maya LARBI University of Luxembourg, Amal Akli University of Luxembourg, Mike Papadakis University of Luxembourg, Rihab BOUYOUSFI Ecole nationale Supérieure d’Informatique (ESI), Maxime Cordy University of Luxembourg, Luxembourg, Federica Sarro University College London, Yves Le Traon University of Luxembourg, Luxembourg
Pre-print
16:45
15m
Talk
Reflections on the Reproducibility of Commercial LLM Performance in Empirical Software Engineering Studies
Research Track
Florian Angermeir fortiss, Maximilian Amougou fortiss GmbH, Mark Kreitz University of the Bundeswehr Munich, Andreas Bauer Technische Hochschule Nürnberg Georg Simon Ohm, Matthias Linhuber Technical University Munich, Davide Fucci Blekinge Institute of Technology, Fabiola Moyón Siemens Technology and Technical University of Munich, Daniel Mendez Blekinge Institute of Technology and fortiss, Tony Gorschek Blekinge Institute of Technology / DocEngineering
DOI Pre-print
17:00
15m
Talk
FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
Virtual Attendance
Research Track
Victor May Google, Diganta Misra Max Planck Institut für Intelligente Systeme (MPI-IS) and ELLIS Institute, Tübingen, Yanqi Luo Salesforce, Anjali Sridhar Google, Justine Gehring Gologic, Silvio Soares Ribeiro Junior Google
17:15
15m
Talk
ProxyWar: Dynamic Assessment of LLM Code Generation in Game Arenas
Distinguished Paper Award
Research Track
Xinyu Wang The University of Adelaide, Wenjun Peng The University of Adelaide, Qi Wu The University of Adelaide