ISSTA 2022
Mon 18 - Fri 22 July 2022 Online

The Artifact Evaluation process is a service provided by the community to help authors of accepted papers provide more substantial supplements to their papers so that future researchers can more effectively build on and compare with previous work.

Call for Artifacts

The deadline for artifact submission has been extended. Please see “Important Dates” for the new dates.

The goal of the artifact evaluation is to foster reproducibility and reusability. Reproducibility refers to researchers or practitioners being able to validate the paper’s results using the provided artifact. Reusability means that researchers can use the artifact in a different context, for a different use case, or to build on and extend the artifact. Overall, the artifact evaluation process allows our field to progress by incentivizing and supporting authors to make their artifacts openly available and improve their quality. See the ACM guidelines on Artifact Review and Badging (Version 1.1).

Submission and Preparation Overview

The following instructions provide an overview of how to prepare an artifact for submission. Please also read the instructions and explanations in the subsequent sections on this page before submission.

  1. Prepare your artifact as well as a README file (with a .txt, .md, or .html extension) with the following two sections:
    • Getting Started, to demonstrate how to set up the artifact and validate its general functionality (e.g., based on a small example) in less than 30 minutes.
    • Detailed Description, to describe how to validate the paper’s claims and results in detail.
  2. Upload the artifact to Zenodo to acquire a DOI.
  3. Submit the DOI and additional information about the artifact using HotCRP.

The Artifact Evaluation Process

The following provides a detailed explanation of the scope of artifacts, the goal of the evaluation process, and the submission instructions.

Scope of Artifacts

Artifacts can be of a variety of types, including (but not limited to):

  • Tools, which are standalone systems.
  • Data repositories storing, for example, logging data, system traces, or survey raw data.
  • Frameworks or libraries, which are reusable components.
  • Machine-readable proofs (see the guide on Proof Artifacts by Marianna Rapoport).

If you are in doubt whether your artifact can be submitted to the AE process, please contact the AE chairs.

Evaluation Objectives and Badging

The evaluation of the artifacts targets three different objectives:

  • Availability (“Artifacts Available” badge): The artifact should be available and accessible to everyone interested in inspecting or using it. As detailed below, an artifact has to be uploaded to Zenodo to obtain this badge.
  • Functionality (“Artifacts Evaluated – Functional” badge): The main claims of the paper should be backed up by the artifact.
  • Reusability (“Artifacts Evaluated – Reusable” badge): Other researchers or practitioners should be able to inspect, understand, and extend the artifact.

Each objective is assessed separately as part of the evaluation process, and each successful outcome is rewarded with the corresponding ACM badge.

Availability

Your artifact should be made available via Zenodo, a publicly funded platform that supports open science. The artifact needs to be self-contained. During upload, you will be required to select a license and provide additional information, such as a description of the artifact. Zenodo will generate a DOI that is necessary for the artifact evaluation submission (HotCRP). Note that, once published, the artifact is immediately public and can no longer be modified or deleted. However, it is possible to upload an updated version of the artifact that receives a new DOI (e.g., to address reviewer comments during the kick-the-tires response phase).
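For authors who prefer to script this step, Zenodo also offers a documented REST API. The following is a minimal, illustrative sketch of creating and publishing a deposition with Python; it assumes a personal access token in the ZENODO_TOKEN environment variable and a packaged artifact.zip, omits error handling, and uses placeholder metadata that you would adapt to your artifact.

```python
# Minimal sketch of uploading an artifact via Zenodo's REST API.
# Assumptions: a personal access token in ZENODO_TOKEN, a packaged
# artifact.zip, and placeholder metadata; error handling is omitted.
import os
import requests

API = "https://zenodo.org/api/deposit/depositions"
token = {"access_token": os.environ["ZENODO_TOKEN"]}

# 1. Create an empty deposition.
dep = requests.post(API, params=token, json={}).json()

# 2. Upload the archive into the deposition's file bucket.
with open("artifact.zip", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/artifact.zip",
                 params=token, data=fp)

# 3. Attach the required metadata (title, description, license, ...).
metadata = {"metadata": {
    "title": "Artifact for <paper title>",
    "upload_type": "software",
    "description": "Replication package for our ISSTA 2022 paper.",
    "creators": [{"name": "Doe, Jane"}],
    "license": "MIT",
}}
requests.put(f"{API}/{dep['id']}", params=token, json=metadata)

# 4. Publish: after this step the record is public and immutable,
#    and the resulting DOI is what you submit to HotCRP.
rec = requests.post(f"{API}/{dep['id']}/actions/publish",
                    params=token).json()
print("DOI:", rec["doi"])
```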

The default storage on Zenodo is currently limited to 50 GB per artifact but can be extended on request (see the Zenodo FAQ, section “Policies”). Still, please keep the size reasonably small to support reviewers in the process.

Functionality

To judge the functionality and reusability of an artifact, two to three reviewers will evaluate every submission. The process happens in two stages. First, reviewers will check the artifact’s basic functionality and communicate potential issues to the authors, who can fix or respond to them (as part of the kick-the-tires response phase). Second, reviewers will evaluate the artifact in detail and validate that it backs up the paper’s important claims.

The README file has to account for these two phases and should be structured in two sections.

The Getting Started section has to describe:

  • the artifact’s requirements; and
  • the steps required to check the artifact’s basic functionality.

For the requirements, please keep in mind that reviewers may use a different operating system and, in general, a different environment than yours. If you decide, for example, to submit only the source code of a tool, ensure that all requirements are documented and widely available. If the artifact is a virtual machine or container, the README should provide detailed instructions on how to run the image or container.

To help reviewers validate your artifact’s basic functionality, describe which basic commands of your artifact to execute, how much time these commands are likely to take, and what output to expect.
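For instance, a small smoke-test script shipped next to the README can encode exactly this information. The sketch below is purely illustrative: the tool name mytool, its flags, and the expected output are hypothetical placeholders for your artifact.

```python
# Hypothetical smoke test for a tool artifact; the command, flags, and
# expected output are placeholders, not any real tool's interface.
import subprocess
import sys
import time

start = time.time()
# Run the tool on a small bundled example (should finish within ~1 minute).
result = subprocess.run(
    ["./mytool", "--input", "examples/small.c", "--report", "out.txt"],
    capture_output=True, text=True, timeout=300,
)
elapsed = time.time() - start

# Tell reviewers exactly what success looks like.
if result.returncode == 0 and "3 warnings reported" in result.stdout:
    print(f"OK: basic functionality confirmed in {elapsed:.0f}s")
else:
    print("FAILED: unexpected output")
    print(result.stdout)
    sys.exit(1)
```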

Please ensure that the overall time to work through the Getting Started section does not exceed 30 minutes.

The Detailed Description section should present how to use the artifact to back up every claim and experiment described in the paper.
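One way to organize this is to map every claim to a single script or command in the README, optionally driven by a top-level runner. A hedged sketch follows; all claim descriptions, script names, and runtimes are hypothetical placeholders.

```python
# Hypothetical top-level driver that maps paper claims to experiments.
# Claim texts, script names, and runtimes are illustrative placeholders.
import subprocess

EXPERIMENTS = {
    "Claim 1 (Table 2): precision and recall on the benchmark suite":
        (["python3", "scripts/run_table2.py"], "about 2 hours"),
    "Claim 2 (Figure 3): scalability up to 100 kLOC":
        (["python3", "scripts/run_fig3.py"], "about 45 minutes"),
}

for claim, (cmd, runtime) in EXPERIMENTS.items():
    print(f"Reproducing {claim} (expected runtime: {runtime})")
    subprocess.run(cmd, check=True)  # abort early if an experiment fails
```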

These are the main requirements for achieving the “Artifacts Evaluated – Functional” badge.

Reusability

For the “Artifacts Evaluated – Reusable” badge, all requirements for the “Artifacts Evaluated – Functional” badge need to be met as a prerequisite. When submitting your artifact to HotCRP, you are required to argue whether and why your artifact should receive an “Artifacts Evaluated – Reusable” badge. A typical reusable artifact is expected to exhibit one or more of the following characteristics:

  • The artifact is highly automated and easy to use.
  • It is comprehensively documented, and the documentation describes plausible scenarios for how it could be extended (see the sketch after this list).
  • The artifact contains everything necessary for others to extend it. For example, a tool artifact includes its source code, all dependencies that are not commonly available, and working instructions for compiling it. Containers or virtual machines with all requirements included are preferred.
  • The README should contain or point to other documentation that is part of the artifact and that describes use-case scenarios or details beyond the scope of the paper. Such documentation is not limited to text; for example, a video tutorial could demonstrate how the artifact can be used and evaluated more generally.
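To make the extensibility points above concrete, an artifact’s documentation can name an explicit extension point in the code. A hypothetical sketch (the Analysis interface, registry, and file names below are illustrative, not taken from any particular artifact):

```python
# Hypothetical extension point a reusable tool artifact might document:
# users add a new analysis by subclassing Analysis and registering it.
from abc import ABC, abstractmethod

REGISTRY = {}

class Analysis(ABC):
    """Base class for analyses; a walkthrough could live in docs/extending.md."""

    @abstractmethod
    def run(self, program_path: str) -> list:
        """Analyze the program at program_path and return a list of findings."""

def register(name):
    """Decorator making an analysis selectable via a --analysis NAME flag."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("null-deref")
class NullDerefAnalysis(Analysis):
    """Example analysis that the documentation could walk readers through."""
    def run(self, program_path: str) -> list:
        return [f"checked {program_path} for null dereferences"]
```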

In general, the wide variety of artifacts makes it difficult to come up with an exact list of expectations. The points above should be seen as a guideline for authors and reviewers of what to provide and what to expect. In case of any doubt, feel free to contact the AEC.

Distinguished Artifact Awards

Artifacts that go above and beyond the expectations of the Artifact Evaluation Committee will receive a Distinguished Artifact Award.

FAQ

  • Is the reviewing process double-blind? No, the reviewing process is single-blind. The reviewers will know the authors’ identities, while reviewers’ identities are kept hidden from the authors. Authors can thus submit artifacts that reveal the authors’ identities.
  • How can we submit an artifact that contains private components (e.g., a commercial benchmark suite)? One option is to upload only the public part of the artifact to Zenodo and to share a link to the private component that is visible only to the reviewers, by specifying the link in the “Bidding Instructions and Special Hardware Requirements” HotCRP field. If this is not possible, another option is to provide reviewers access to a machine that allows them to interact with the artifact’s private component. Both options must adhere to the single-blind reviewing process (i.e., they must not reveal the reviewers’ identities). Whether an “Artifacts Available” badge will be awarded for partially available artifacts is determined based on the AEC’s evaluation.

Contact

If you have any questions or comments, please reach out to the Artifact Evaluation Chairs.

Call for Reviewers

Please use the Nomination Form to nominate yourself or a colleague for the ISSTA 2022 Artifact Evaluation Program Committee.