ISSTA 2021
Sun 11 - Sat 17 July 2021 Online
co-located with ECOOP and ISSTA 2021

The Artifact Evaluation process is a service provided by the community to help authors of accepted papers provide more substantial supplements to their papers so that future researchers can more effectively build on and compare with previous work.

Accepted Artifacts

  • AdvDoor: Adversarial Backdoor Attack of Deep Learning System
  • Artifact for #468: TERA: Optimizing Stochastic Regression Tests in Machine Learning Projects
  • Automated Patch Backporting in Linux
  • Boosting Symbolic Execution via Constraint Solving Time Prediction
  • Challenges and Opportunities: An In-depth Empirical Study on Configuration Error Injection Testing
  • DeepCrime: Mutation Testing of Deep Learning Systems based on Real Faults
  • DeepHyperion: Exploring the Feature Space of Deep Learning-Based Systems through Illumination Search
  • Deep Just-in-Time Defect Prediction: How Far Are We?
  • DialTest: Automated Testing for Recurrent-Neural-Network-Driven Dialogue Systems
  • Efficient White-box Fairness Testing through Gradient Search
  • Empirical Evaluation of Smart Contract Testing: What Is the Best Choice?
  • Exposing Previously Undetectable Faults in Deep Neural Networks
  • Finding Data Compatibility Bugs with JSON Subschema Checking
  • Fixing Dependency Errors for Python Build Reproducibility
  • Gramatron: Effective Grammar-aware Fuzzing
  • Grammar-Agnostic Symbolic Execution by Token Symbolization
  • Model-Based Testing of Networked Applications
  • ModelDiff: Testing-based DNN Similarity Comparison for Model Reuse Detection
  • QFuzz: Quantitative Fuzzing for Side Channels
  • Runtime Detection of Memory Errors with Smart Status
  • Seed Selection for Successful Fuzzing
  • Semantic Matching of GUI Events for Test Reuse: Are We There Yet?
  • Test-Case Prioritization for Configuration Testing
  • The Impact of Tool Configuration Spaces on the Evaluation of Configurable Taint Analysis for Android
  • Toward Optimal MC/DC Test Case Generation
  • Type and Interval aware Array Constraint Solving for Symbolic Execution
  • Understanding and Finding System Setting-Related Defects in Android Apps
  • Validating Static Warnings via Testing Code Fragments
  • WebEvo: Taming Web Application Evolution via Detecting Semantic Structure Change

Call for Reviewers

The ISSTA 2021 Artifact Evaluation (AE) chairs are seeking self-nominations from reviewers willing to serve on the Artifact Evaluation Committee (AEC). Reviewers are typically senior PhD students or postdocs who have participated in the AE process as authors or reviewers, although neither is a hard requirement. We expect reviewers to evaluate 2-3 artifacts between April 30, 2021 and June 11, 2021. Note that a self-nomination is no guarantee of an invitation to serve on the AEC.

Please nominate yourself by submitting the following form: https://docs.google.com/forms/d/1a5XpN1U4hGUKBJHD77F0JIiv8qFCJLfCaxRL_d-56yE.

The application deadline is February 15, 2021.

Decisions will be sent at the beginning of March 2021.

Call for Artifacts

The goal of the artifact evaluation is to foster reproducibility and reusability. Reproducibility refers to researchers or practitioners being able to validate the paper’s results using the provided artifact. Reusability means that researchers can use the artifact in a different context, for a different use case, or to build on and extend the artifact. Overall, the artifact evaluation process allows our field to progress by incentivizing and supporting authors to make their artifacts openly available and improve their quality. See the ACM guidelines on Artifact Review and Badging (Version 1.1).

Submission and Preparation Overview

The following instructions provide an overview of how to prepare an artifact for submission. Please also read the instructions and explanations in the subsequent sections on this page before submission.

  1. Prepare your artifact as well as a README file (with a .txt, .md, or .html extension) containing the following two sections (a minimal outline is sketched after this list):
    • Getting Started, to demonstrate how to set up the artifact and validate its general functionality (e.g., based on a small example) in less than 30 min.
    • Detailed Description, to describe how to validate the paper’s claims and results in detail.
  2. Upload the artifact to Zenodo to acquire a DOI.
  3. Submit the DOI and additional information about the artifact using HotCRP.
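
For orientation, the README of a typical tool artifact could be structured along these lines. Only the two section names are prescribed above; the Markdown heading syntax and the wording of each placeholder are illustrative and should be adapted to your artifact:

    # <Artifact name>: artifact for "<paper title>" (ISSTA 2021)

    ## Getting Started
    Requirements, set-up steps, and a small example that demonstrates the
    basic functionality of the artifact in less than 30 minutes.

    ## Detailed Description
    For each claim, experiment, table, and figure in the paper: the commands
    to run, the approximate runtime, and the expected output.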

The Artifact Evaluation Process

The following provides a detailed explanation of the scope of artifacts, the goal of the evaluation process, and the submission instructions.

Scope of Artifacts

Artifacts can be of various types, including (but not limited to):

  • Tools, which are standalone systems.
  • Data repositories storing, for example, logging data, system traces, or survey raw data.
  • Frameworks or libraries, which are reusable components.
  • Machine-readable proofs (see the guide on Proof Artifacts by Marianna Rapoport).

If you are in doubt whether your artifact can be submitted to the AE process, please contact the AE chairs.

Evaluation Objectives and Badging

The evaluation of the artifacts targets three objectives:

  • Availability (“Artifacts Available v1.1” badge): The artifact should be available and accessible to everyone interested in inspecting or using it. As detailed below, an artifact has to be uploaded to Zenodo to obtain this badge.
  • Functionality (“Artifacts Evaluated – Functional” badge): The main claims of the paper should be backed up by the artifact.
  • Reusability (“Artifacts Evaluated – Reusable” badge): Other researchers or practitioners should be able to inspect, understand, and extend the artifact.

Each objective is assessed as part of the evaluation process, and each successful outcome is awarded the corresponding ACM badge.

Availability

Your artifact should be made available via Zenodo, a publicly-funded platform aiming to support open science. The artifact needs to be self-contained. During upload, you will be required to select a license and provide additional information, such as a description of the artifact. Zenodo will generate a DOI that is necessary for the artifact evaluation submission (HotCRP). Note that the artifact is immediately public and can no longer be modified or deleted. However, it is possible to upload an updated version of the artifact that receives a new DOI (e.g., to address reviewer comments during the kick-the-tires response phase).

Zenodo’s default storage is currently limited to 50 GB per artifact but can be extended on request (see the Zenodo FAQ, section “Policies”). Still, please keep the size reasonably small to support reviewers in the process.
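
Most authors will simply use Zenodo’s web upload form, but uploads can also be scripted against Zenodo’s documented REST API. The following Python sketch is illustrative only and is not part of the official submission instructions; it assumes a personal access token with the deposit scopes, and the file name and metadata are placeholders to replace with your own:

    import json
    import requests

    ZENODO = "https://zenodo.org/api"
    PARAMS = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder; never commit real tokens
    ARTIFACT = "artifact.zip"                       # placeholder file name

    # 1. Create an empty deposition (draft record).
    dep = requests.post(f"{ZENODO}/deposit/depositions", params=PARAMS, json={}).json()

    # 2. Upload the artifact archive into the deposition's file bucket.
    with open(ARTIFACT, "rb") as fp:
        requests.put(f"{dep['links']['bucket']}/{ARTIFACT}", data=fp, params=PARAMS)

    # 3. Attach minimal metadata; a license must also be chosen before publishing.
    metadata = {"metadata": {
        "title": "Artifact for: <paper title>",
        "upload_type": "software",
        "description": "Artifact accompanying our ISSTA 2021 paper.",
        "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
    }}
    requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}", params=PARAMS,
                 data=json.dumps(metadata), headers={"Content-Type": "application/json"})

    # 4. Publishing makes the record public and permanent and mints the DOI
    #    that you submit to HotCRP.
    published = requests.post(
        f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish", params=PARAMS).json()
    print("DOI:", published["doi"])

Remember that publishing is irreversible; to address issues later (e.g., after the kick-the-tires response phase), upload a new version, which receives a new DOI.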

Functionality

To judge the functionality and reusability of an artifact, two to three reviewers will evaluate every submission. The process happens in two stages. First, reviewers will check the artifact’s basic functionality (as part of a kick-the-tires phase) and communicate potential issues to the authors, who can fix or respond to them (as part of a kick-the-tires response phase). Second, reviewers will evaluate the artifact in detail and validate that it backs up the paper’s important claims.

The README file has to account for these two phases and should be structured in two sections.

The Getting Started section has to describe:

  • the artifact’s requirements;
  • the steps required to check the artifact’s basic functionality.

For the requirements, please keep in mind that reviewers could use a different operating system and, in general, a different environment than yours. If you decide to, for example, submit only the source code of a tool, ensure that all the requirements are documented and widely available. If the artifact is a virtual machine or container, the README should give detailed instructions on how to run the image or container.

To help reviewers validate your artifact’s basic functionality, describe which basic commands of your artifact to execute, how much time these commands will likely take, and what output to expect.

Please ensure that the total time needed to evaluate the Getting Started section does not exceed 30 minutes.

The Detailed Description section should present how to use the artifact to back up every claim and experiment described in the paper.
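
To make this concrete, the README content for a hypothetical command-line tool shipped as a Docker image might read as follows; the image name, scripts, runtimes, output messages, and result files are invented placeholders:

    Getting Started (about 20 minutes)
      Requirements: Docker 20.x, 8 GB RAM, 10 GB of free disk space.
      1. Load and start the container:
           docker load -i mytool-image.tar
           docker run -it mytool /bin/bash
      2. Inside the container, run the smoke test (about 5 minutes):
           ./run_small_example.sh
         Expected output: "All 12 checks passed" and a short summary table for the
         small example from Section 2 of the paper.

    Detailed Description
      Claim 1 (Table 3): ./reproduce_table3.sh (about 2 hours); compare
        results/table3.csv against Table 3 in the paper.
      Claim 2 (Figure 5): ./reproduce_fig5.sh (about 45 minutes); the plot is
        written to results/figure5.pdf.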

These are the main requirements for the “Artifacts Evaluated – Functional” badge.

Reusability

For the “Artifacts Evaluated – Reusable” badge, all requirements for the “Artifacts Evaluated – Functional” badge need to be met as a prerequisite. When submitting your artifact via HotCRP, you are asked to argue whether and why your artifact should receive the “Artifacts Evaluated – Reusable” badge. A typical reusable artifact is expected to exhibit one or more of the following characteristics:

  • The artifact is highly automated and easy to use.
  • It is comprehensively documented, and the documentation describes plausible scenarios on how it could be extended.
  • The artifact contains everything necessary for others to extend it. For example, a tool artifact includes its source code, all dependencies that are not commonly available, and working instructions for compiling it. Containers or virtual machines that bundle all requirements are preferred.
  • The README should contain or point to other documentation that is part of the artifact and describes use case scenarios or details beyond the scope of the paper. Such documentation is not limited to text; for example, a video tutorial could demonstrate how the artifact could be used and evaluated more generally.

In general, the wide variety of artifacts makes it difficult to come up with an exact list of expectations. The points above should be seen as a guideline for authors and reviewers of what to provide and what to expect. In case of any doubt, feel free to contact the AEC.

Distinguished Artifact Awards

Artifacts that go above and beyond the expectations of the Artifact Evaluation Committee will receive a Distinguished Artifact Award.

FAQ

  • Is the reviewing process double-blind? No, the reviewing process is single-blind. The reviewers will know the authors’ identities, while the reviewers’ identities are kept hidden from the authors. Authors can thus submit artifacts that reveal their identities.
  • How can we submit an artifact that contains private components (e.g., a commercial benchmark suite)? One option is to upload only the public part of the artifact to Zenodo and share a link to the private component that is visible only to the reviewers, by specifying the link in the Bidding Instructions and Special Hardware Requirements HotCRP field. If this is not possible, another option is to provide reviewers access to a machine that allows them to interact with the artifact’s private component. Both options must adhere to the single-blind reviewing process (i.e., they must not reveal the reviewers’ identities). Whether an “Artifacts Available” badge will be awarded for a partially available artifact will be determined by the AEC’s evaluation.

Contact

If you have any questions or comments, please reach out to the Artifact Evaluation Chairs at issta21-artifacts@googlegroups.com.
