EASE 2026
Tue 9 - Fri 12 June 2026 Glasgow, United Kingdom

Call for Papers

Empirical evidence underpins much of software engineering (SE) research, yet its long-term value depends on whether findings can be reproduced, replicated, and extended. This EASE track invites contributions that examine the robustness of prior empirical software engineering work, whether by successfully reproducing results, revealing reproducibility barriers, or uncovering surprising limitations.

Where studies are reproducible, we are also interested in their extensibility to other datasets, methods, or contexts, or in extensions of the original scope. Note, however, that a paper solely reporting that a prior work’s results were reproduced is not sufficient on its own. For papers that could not be replicated, we are interested in the specific issues that led the authors to that conclusion, such as:

  • Data or code availability.
  • Lack of instructions on how to run the code, where running it is not obvious (for example, a repository containing only a main.py file with no explicit instructions is still considered obvious to run, so that case alone does not qualify).
  • Code errors caused by dependency versions. Such papers must additionally attempt to pin those versions and document any changes made. A straightforward solution is to use the latest releases available at the time of the prior paper’s publication (see the sketch after this list).
  • Obtaining results that are statistically significantly different from those originally reported, despite repeated, good-faith attempts (e.g., after changing parameters in the code).
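
Where the dependency-version issue above applies, a minimal sketch such as the following (in Python, with hypothetical package names and pinned version numbers) can document the check that the installed dependencies match the releases assumed to have been current at the time of the original paper; it is an illustration only, not a required procedure.

# Minimal sketch (hypothetical package names and versions): check that the
# installed dependencies match the releases pinned to the original paper's
# publication date before re-running its experiments.
from importlib.metadata import PackageNotFoundError, version

PINNED = {"numpy": "1.24.2", "scikit-learn": "1.2.2", "pandas": "1.5.3"}

for pkg, expected in PINNED.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed (expected {expected})")
        continue
    status = "OK" if installed == expected else f"MISMATCH (installed {installed})"
    print(f"{pkg}=={expected}: {status}")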

Topics

We welcome submissions in two broad categories, particularly where they relate to the evaluation and assessment of software products, processes, practices, tools, or techniques in any of the EASE topic areas:

  1. Replications and Reproducibility Studies
    • Successful replications of peer-reviewed SE research, with discussion of extensions to new datasets, methods, domains, or contexts related to EASE topics.
    • Partial or unsuccessful replications, with clear analysis of causes such as missing or inaccessible artefacts, insufficient documentation, dependency incompatibilities, or statistically divergent outcomes.
    • Work that addresses reproducibility challenges, including attempts to repair problems and measure the effects of these interventions.
  2. Negative and Surprising Results
    • Well-motivated methods or approaches that, despite being reasonable and aligned with best practice, fail to achieve expected results, together with analysis and implications.
    • Ablation studies of components in previously proposed models that demonstrate their contributions differ from what was originally reported; for example, a paper may have attributed significant improvements to component X, while the ablation study reveals that most of the improvement is actually due to component Y.
    • Evidence that widely adopted methods fail to generalise to new datasets, domains, or settings.
    • Simple baselines that match or outperform more complex methods.
    • Sensitivity analyses showing instability of prior results due to factors such as hardware, random initialisation, or preprocessing (a minimal seed-sensitivity sketch appears after this list).
    • Critiques of common empirical software engineering metrics, evaluation methods, or data practices that undermine fair comparison.
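
For the seed-sensitivity item above, a minimal sketch such as the following (in Python, where run_experiment is a hypothetical placeholder for a full re-run of the original study's pipeline) repeats an experiment across several random seeds and reports the spread of the headline metric; actual submissions would substitute the original study's pipeline and metric.

# Minimal sketch: re-run an experiment under several random seeds and report
# the spread of the resulting metric. run_experiment is a hypothetical
# placeholder standing in for a full re-run of the original pipeline.
import random
import statistics

def run_experiment(seed: int) -> float:
    random.seed(seed)
    # Stand-in value for the reproduced headline metric (e.g., an F1 score).
    return 0.80 + random.uniform(-0.05, 0.05)

scores = [run_experiment(seed) for seed in range(10)]
print(f"mean={statistics.mean(scores):.3f}, stdev={statistics.stdev(scores):.3f}")
print(f"min={min(scores):.3f}, max={max(scores):.3f}")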

Please note that the above list of topics is not exhaustive.

Evaluation Criteria

Submissions to the RENE track will be evaluated based on the following criteria:

  • Rigour of the conducted studies.
  • Quality of writing.
  • Amount of useful, actionable insights.
  • Availability of artefacts.

A paper reporting negative results due primarily to misaligned expectations or to a lack of statistical power (small samples) is not sufficient for submission to the RENE track. The negative result should stem from a genuine absence of effect, not from a lack of methodological rigour.

By highlighting reproducibility challenges and reporting well-founded negative results, this track aims to strengthen the credibility, transparency, and methodological rigour of empirical software engineering research. Submissions should present a clear empirical basis, provide actionable insights for researchers and practitioners, and demonstrate relevance to the broader EASE community.

Submission Instructions

Authors should use the official ACM Primary Article Template for their manuscripts. Please note that using the wrong template may lead to desk rejection. For LaTeX users, the following options should be specified:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2026]{The 30th International Conference on Evaluation and Assessment in Software Engineering}{9–12 June, 2026}{Glasgow, Scotland, United Kingdom}

All papers must be submitted in PDF format through EasyChair, the web-based submission system for EASE 2026. Submissions must not exceed 10 pages for the main text, including all figures, tables, appendices, etc. Up to 2 additional pages containing ONLY references are allowed.

We will invite the authors of the best papers to submit extended versions of their work to a Special Issue of the Journal of Software: Evolution and Process. More details will be provided later on the conference website.

Important Dates

Abstract Submission Deadline: Mon 23 Feb 2026
Submission Deadline: Mon 2 Mar 2026
Notification Deadline: Mon 6 Apr 2026
Camera Ready Deadline: Mon 20 Apr 2026