ICSA 2025
Mon 31 March - Fri 4 April 2025 Odense, Denmark

Call for Artifacts

The ICSA 2025 Artifact Evaluation Track (AET) aims to review and promote the research artifacts of papers accepted at ICSA 2025. Artifacts can be software systems, scripts, or datasets related to accepted papers at ICSA 2025. High-quality artifacts of published research papers are the foundation that allows other researchers to reproduce the results and are thus a desirable part of the publication itself. Tools, in the context of this call, are software systems supporting software architects or software architecture researchers, independent of a previously accepted research paper.

General Information

Authors of the papers accepted to the following ICSA 2025 tracks are expected to submit an artifact for evaluation for the Research Object Reviewed (ROR) badges and the Open Research Object (ORO) badge: Technical Track, Journal-First Track, Software Architecture in Practice Track, and New and Emerging Ideas Track. All authors of papers related to the topics mentioned in the call for papers of the ICSA technical track are invited to submit studies for the Results Reproduced (ROR-R) and Results Replicated (RER) badges.

Important: we require one author of each artifact submission to peer-review 2 or 3 other submitted artifacts.

Candidate Artifacts

Artifacts of interest include (but are not limited to) the following:

  • Software, which comprises implementations of systems or algorithms potentially useful in other studies
  • Data repositories, which are data (e.g., logging data, system traces, raw data of a survey, raw data of a literature review) that can be used for multiple software engineering approaches
  • Frameworks, which are tools and services illustrating new approaches to software architecture that could be used by other researchers in different contexts

This list is not exhaustive, so authors are asked to email the chairs before submitting if their proposed artifact is not on this list. Data sharing principles and approaches, along with the general notion of open science, are introduced in the book chapter Open Science in Software Engineering by Méndez, Graziotin, Wagner, and Seibold: https://arxiv.org/abs/1904.06499.

The best artifact, as selected by the reviewers, will receive the Best Artifact Award.

For accepted ICSA 2025 papers, the awarded badge(s) will be added to the paper in the official IEEE proceedings.

Evaluation Criteria and Badges

The evaluation criteria for badges and additional information are adopted from ICSE 2025, which in turn are based on the ACM policy on Artifact Review and Badging Version 1.1 and on the NISO Recommended Practice on Reproducibility Badging and Definitions, which is supported by our publisher, IEEE.

The ICSA 2025 artifact evaluation track uses a single-blind review process. The reviewers for the artifacts will be a combination of artifact authors and ICSA program committee members. Artifacts will be evaluated using the criteria and review process listed for each badge below.


The badges fall into three groups: Research Object Reviewed (ROR – Functional and ROR – Reusable), Open Research Object (ORO), and Results Validated (Results Reproduced, ROR-R, and Results Replicated, RER). The badges to be awarded and their evaluation criteria are the following.

Research Object Reviewed (ROR) – Functional (corresponds to the ACM “Functional” badge)
  • Criteria: Artifacts are documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
  • Review process: Authors submit their artifact; reviewers can execute the artifact and obtain initial results suggesting that the artifact can be used to reproduce the results of the paper (not necessarily all results, but those claimed in the paper and artifact).

Research Object Reviewed (ROR) – Reusable (corresponds to the ACM “Reusable” badge)
  • Criteria: Functional + the artifact is very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
  • Review process: Authors submit their artifact; reviewers find that it is functional and, additionally, that the documentation and structure of the artifact allow it to be reused easily.

Open Research Object (ORO) (corresponds to the ACM “Available” badge)
  • Criteria: Author-created artifacts relevant to this paper have been placed in a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.
  • Review process: Authors submit their artifact; reviewers confirm that all artifacts relevant to this paper are available.

Results Reproduced (ROR-R) (corresponds to the ACM “Reproduced” badge)
  • Criteria: Functional + the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
  • Review process: (1) Independent researchers submit evidence that results of previous ICSA papers were independently reproduced, OR (2) authors submit their artifact and reviewers were able to reproduce all results in the paper (only if doable with low effort).

Results Replicated (RER) (corresponds to the ACM “Replicated” badge)
  • Criteria: ROR + ORO + the main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.
  • Review process: Independent researchers submit evidence that results of previous ICSA papers were independently replicated.

Papers with such badges contain functional and reusable products that other researchers can use to bootstrap their research. Experience shows that such papers earn increased citations and greater prestige in the research community.

General information and further suggested reading about artifact evaluation can be viewed in a public draft standard by Baldassarre, Ernst, Hermann, and Menzies.

Important Dates

  • Artifact Evaluation Registration Deadline: January 8th, 2025
  • Artifact Evaluation Submissions Deadline: January 15th, 2025
  • Artifact Evaluation Notification: February 16th, 2025

Note: all deadlines are at 23:59 AoE (Anywhere on Earth).

Submission and Review

The submission and review criteria are mainly taken from ICSE 2025, with some adaptations. Different badges have different submission procedures, as described below: (1) Submission Process for All Artifacts, (2) Submitting Results Reproduced (ROR-R) and Results Replicated (RER) Badges, and (3) Submitting Tools.

Submission Process for All Artifacts

Authors must perform the following steps to submit an artifact (including tools):

  • Preparing the artifact
  • Documenting the artifact
  • Making the artifact available
  • Submitting the artifact
  • Clarification period

1. Preparing the Artifact

There are two options depending on the nature of the artifacts: Installation Package or Simple Package. In both cases, the configuration and installation of the artifact should take less than 30 minutes. Otherwise, the artifact is unlikely to be endorsed simply because the committee will not have sufficient time to evaluate it.

Installation Package. If the artifact consists of a tool or software system, then the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instructions, code, and data so that a computer scientist with reasonable knowledge of scripting, build tools, etc. could install, build, and run the code. If the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a virtual machine in the Open Virtual Appliance (OVA) format (e.g., using Oracle VirtualBox or VMware) or a Docker container image with a working environment containing the artifact and all the necessary tools. Alternatively, authors can provide a link to a web service that allows reviewers to execute the artifact (e.g., using Jupyter Notebooks or codeocean.com). Otherwise, the proposal may be desk-rejected.

Simple Package. If the artifact only contains documents that can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration) the authors can just save all documents in a single package file (zip or tar.gz).
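For installation packages, a small self-check script can help reviewers confirm within the 30-minute budget that the environment is set up correctly. The following is a minimal, hypothetical sketch in Python; the file name, required tools, and data paths are illustrative and not prescribed by this call.

    #!/usr/bin/env python3
    """check_install.py -- hypothetical self-check an artifact could ship with.

    Verifies that the interpreter version, required external tools, and bundled
    data files are present before reviewers attempt the full evaluation.
    """
    import shutil
    import sys
    from pathlib import Path

    REQUIRED_PYTHON = (3, 9)                        # illustrative minimum version
    REQUIRED_TOOLS = ["git", "make"]                # illustrative external dependencies
    REQUIRED_DATA = [Path("data/survey_raw.csv")]   # illustrative bundled input

    def main() -> int:
        problems = []
        if sys.version_info < REQUIRED_PYTHON:
            problems.append(f"Python >= {REQUIRED_PYTHON} required, found {sys.version_info[:2]}")
        for tool in REQUIRED_TOOLS:
            if shutil.which(tool) is None:
                problems.append(f"required tool not on PATH: {tool}")
        for path in REQUIRED_DATA:
            if not path.exists():
                problems.append(f"missing data file: {path}")
        if problems:
            print("Installation check FAILED:\n  - " + "\n  - ".join(problems))
            return 1
        print("Installation check passed; the artifact is ready to run.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Reviewers would run such a script right after following the INSTALL instructions, which keeps simple setup problems out of the actual evaluation.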

2. Documenting the Artifact

The authors need to write and submit documentation explaining how to obtain the artifact package, how to unpack the artifact, how to get started, and how to use the artifact in more detail. The artifact submission should describe only the technicalities of the artifacts and those uses of the artifact that are not already described in the paper.

On the submission site, the following documents should be provided (in Markdown, plain text, or PDF format).

  • A main README file describing what the artifact does and where it can be obtained (with hidden links and an access password, if necessary). Also, there should be a clear step-by-step description of how to reproduce the results presented in the paper (i.e., tables, figures, and other reported results), including links to all necessary scripts and input data; a minimal sketch of such an entry point follows this list. If the results presented in the paper take more than one hour on a standard laptop to reproduce, authors are expected to additionally provide simplified examples with runtimes of less than one hour. If the results in the paper have been generated with an artifact using data that is subject to non-disclosure agreements, authors should provide artificial replacement data that allows reviewers to assess the artifact independently of that data.
  • An INSTALL file with step-by-step installation instructions (if any). These instructions should include notes illustrating a very basic usage example or a method to test the installation, for instance, what output to expect that confirms that the code is installed, working, and doing something interesting and useful.
  • A copy of the accepted paper in PDF format, if any.
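As an illustration of the step-by-step reproduction description requested above, an artifact might expose a single entry point that regenerates the paper's tables and figures, with a reduced "quick" mode for the one-hour limit. This is only a sketch: all file names, column names, and options below are hypothetical, and the analysis shown merely stands in for whatever the paper actually computes.

    #!/usr/bin/env python3
    """reproduce.py -- hypothetical single entry point for regenerating paper results."""
    import argparse
    import csv
    from pathlib import Path

    def summarize(rows, metric):
        """Average the given metric per system; stands in for the paper's real analysis."""
        totals, counts = {}, {}
        for row in rows:
            system = row["system"]
            totals[system] = totals.get(system, 0.0) + float(row[metric])
            counts[system] = counts.get(system, 0) + 1
        return {system: totals[system] / counts[system] for system in totals}

    def main():
        parser = argparse.ArgumentParser(description="Reproduce the results reported in the paper.")
        parser.add_argument("--quick", action="store_true",
                            help="use the reduced dataset so the run finishes well under one hour")
        parser.add_argument("--out", default="results/table2.csv",
                            help="where to write the regenerated table")
        args = parser.parse_args()

        data_file = Path("data/measurements_small.csv" if args.quick else "data/measurements_full.csv")
        with data_file.open(newline="") as fp:
            rows = list(csv.DictReader(fp))

        table = summarize(rows, metric="latency_ms")

        out_path = Path(args.out)
        out_path.parent.mkdir(parents=True, exist_ok=True)
        with out_path.open("w", newline="") as fp:
            writer = csv.writer(fp)
            writer.writerow(["system", "mean_latency_ms"])
            for system, mean in sorted(table.items()):
                writer.writerow([system, f"{mean:.2f}"])
        print(f"Wrote {out_path}; compare against the corresponding table in the paper.")

    if __name__ == "__main__":
        main()

A README could then simply instruct reviewers to run the quick mode first and the full mode only if time permits.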

Note that for the badge Research Object Reviewed (ROR) – Reusable, extensive documentation of the artifact is needed that allows other researchers to repurpose the tool. For software artifacts, this requires including an overview of the artifact's source code and its architecture.

3. Making the Artifact Available

The authors need to make the packaged artifact (installation package or simple package, plus its documentation) available so that the Evaluation Committee can access it. We suggest a link to a public repository (e.g., GitHub) or to a single archive file in a widely available archive format. If the authors are aiming for the Open Research Object (ORO) badge and beyond, the artifact needs to be publicly accessible in an archival repository that guarantees long-term storage.

We suggest using Zenodo, Figshare, or Software Heritage (see their submission guide), or similar free services that provide Digital Object Identifiers (DOIs). For the other badges, the artifacts do not necessarily have to be publicly accessible for the review process; in this case, the authors are asked to provide a private link or a password-protected link. In any case, we encourage the authors to use permanent repositories dedicated to data sharing where no registration is necessary for those accessing the artifacts (e.g., please avoid services such as Google Drive).
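For authors aiming at ORO, depositing the package on Zenodo can also be scripted against Zenodo's REST deposit API. The sketch below reflects the endpoints as publicly documented at the time of writing and is not required by this call; most authors will simply use the web interface, and all metadata values and file names are placeholders.

    import requests

    ZENODO = "https://zenodo.org/api"
    TOKEN = "YOUR-ZENODO-ACCESS-TOKEN"  # personal access token with deposit scope

    # 1. Create an empty deposition.
    deposition = requests.post(f"{ZENODO}/deposit/depositions",
                               params={"access_token": TOKEN}, json={}).json()

    # 2. Upload the packaged artifact into the deposition's file bucket.
    with open("artifact.zip", "rb") as fp:
        requests.put(f"{deposition['links']['bucket']}/artifact.zip",
                     data=fp, params={"access_token": TOKEN})

    # 3. Attach minimal metadata (placeholder values; the license identifier
    #    follows Zenodo's own vocabulary).
    metadata = {"metadata": {
        "title": "Replication package for <paper title>",
        "upload_type": "software",
        "description": "Artifact accompanying our ICSA 2025 paper.",
        "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
        "license": "mit",
    }}
    requests.put(f"{ZENODO}/deposit/depositions/{deposition['id']}",
                 params={"access_token": TOKEN}, json=metadata)

    # 4. Publish the deposition, which mints the DOI to cite in the submission.
    published = requests.post(
        f"{ZENODO}/deposit/depositions/{deposition['id']}/actions/publish",
        params={"access_token": TOKEN}).json()
    print("DOI:", published["doi"])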

Note that to earn Open Research Object (ORO) or higher, the artifact needs to be released under some form of open-source license.

4. Submitting the Artifact

By the artifact registration deadline (see above), register your research artifact at the site (TBD).

By the artifact submission deadline, complete your artifact submission by providing all requested information. Before the actual evaluation, reviewers will check the integrity of the artifact and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, VM won’t start, immediate crashes on the simplest example, etc.).
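One lightweight way for authors to support this integrity check is to publish checksums alongside the package, so that corrupted or incomplete downloads are easy to detect. A minimal sketch (directory and file names are illustrative):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file in chunks so large artifact packages fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fp:
            for chunk in iter(lambda: fp.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Record a checksum for every packaged file; reviewers rerun this and compare.
    with open("SHA256SUMS.txt", "w") as out:
        for path in sorted(Path("artifact-package").rglob("*")):
            if path.is_file():
                out.write(f"{sha256_of(path)}  {path}\n")
    print("Wrote SHA256SUMS.txt")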

5. Clarification period

During the review, the reviewers may contact the authors to request clarifications on the basic installation and start-up procedures, or to resolve simple installation problems or other questions. Please make sure that at least one author is available during this time to respond to reviewer questions. More details on this procedure can be found below in the section Communication during the Clarification Period. Given the short review time available, the authors are expected to respond within 72 hours. Authors may update their research artifacts after submission only for changes requested by reviewers in the clarification period.

Submitting Results Reproduced (ROR-R) and Results Replicated (RER) Badges

For the Results Reproduced (ROR-R) and Results Replicated (RER) badges, authors will need to provide, in their abstract, appropriate documentation that their artifacts have reached that stage. The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed of the reviewer evaluating the artifact. Please also mention whether running your artifact requires specific operating systems or other environments.

TITLE: A (Partial)? (Replication or Reproduction) of XYZ. Please add the term partial to your title if only some of the original work could be replicated/reproduced.

  • WHO: name the original authors (and paper) and the authors that performed the replication/reproduction. Include contact information and mark one author as the corresponding author
  • WHERE: include a web link to a publicly available URL directory containing (a) the original paper (that is being reproduced) and (b) any subsequent paper(s)/documents/reports that do the reproduction
  • WHAT: describe the “thing” being replicated/reproduced
  • WHY: clearly state why that “thing” is interesting/important
  • HOW: describe how the original work was done
  • STATE: describe the replication/reproduction. If the replication/reproduction was only partial, then explain what parts could be achieved or had to be missed
  • DISCUSSION: What aspects of this “thing” made it easier/harder to replicate/reproduce? What are the lessons learned from this work that would enable more replication/reproduction in the future for other kinds of tasks or other kinds of research?

Two reviewers will review each abstract, possibly reaching out to the authors of the abstract or of the original paper. Abstracts will be ranked as follows:

  • If the reviewers do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected
  • Any abstract that is judged to be unnecessarily critical of prior work will be rejected (*)
  • The remaining abstracts will be sorted according to (a) how interesting they are to the community and (b) their correctness
  • The top-ranked abstracts will be invited to give lightning talks

(*) Our goal is to foster a positive environment that supports and rewards researchers for conducting replications and reproductions. To that end, we require that all abstracts and presentations pay due respect to the work they are reproducing/replicating. Criticism of prior work is acceptable only as part of a balanced and substantive discussion of prior accomplishments.

Artifact Evaluation Track Co-Chairs

Ivano Malavolta, Vrije Universiteit Amsterdam, the Netherlands
Adel Noureddine, University of Pau and Pays de l'Adour, France