Sun 22 - Fri 27 September 2024 Linz, Austria


MODELS will once again run a separate evaluation process to assess the quality of the artifacts supporting the work presented in accepted papers. The purpose of the artifact evaluation process is to acknowledge the considerable effort required to produce high-quality artifacts, to foster a culture of experimental reproducibility, and to provide a peer-review and archiving process for artifacts analogous to that of research papers. The goal of artifact archiving is to ensure that artifacts stay available for a long time, can be located easily, and can be reused by other researchers. Additionally, archiving allows designating exactly the version of the artifact that was used to produce the research results.

We focus on assessing the artifacts themselves and on helping to improve them, rather than on evaluating the quality of the research linked to the artifact. This process assumes that the quality of the research has already been assessed and approved for MODELS by the respective program committees. Thus, the main goal of our review process is constructive: to improve the submitted artifacts, not to reject or filter them. An artifact may nevertheless be rejected if we determine that it cannot be improved to sufficient quality in the given time frame, that it is inconsistent with the paper’s results, or that it is not sufficiently relevant to the scope of the main research paper or to the MODELS community at large.

To summarize, a good artifact is:

  • Consistent with the paper,
  • As complete as possible,
  • Well-documented,
  • Easy to (re)use,
  • Publicly available and archived.


We follow the “Artifact Review and Badging Version 1.1” policy provided by the ACM, which defines three types of badges. For convenience we summarize these badges below, but please refer to the policy itself for exact definitions.

Note that only the Artifact Evaluated and Artifacts Available Badges can be claimed before publication of your paper, through the process described in the email you will receive when your paper is accepted. The Results Validated Badge requires subsequent peer-reviewed publications that replicate/reproduce the results of the paper, and can therefore only be obtained after your paper has been published.

Artifact Evaluated Badge

This badge is applied to papers whose associated artifacts have successfully completed an independent audit. Artifacts need not be made publicly available to be considered for this badge. However, they do need to be made available to reviewers. Two levels are distinguished, only one of which should be applied in any instance:

Artifact Evaluated − Functional: The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
Artifact Evaluated − Reusable: The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifact Evaluated − Functional level but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.

Artifacts Available Badge

This badge is applied to papers in which associated artifacts have been made permanently available for retrieval.

Artifacts Available: Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository, along with a unique identifier for the object, is provided.

Results Validated Badge

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the author. Two levels are distinguished:

Results Validated − Reproduced: The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.
Results Validated − Replicated: The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.

Note that, as explained above, this badge cannot be claimed before the publication of your paper, as it requires subsequent peer-reviewed publications that replicate/reproduce the results of the paper. Hence, to claim a post-publication Results Validated Badge in the future, please send an email to one of the MODELS 2024 Artifact Evaluation chairs (even if they will technically no longer be acting as chairs). Your case will then be evaluated ad hoc directly by the chairs.

Submission Guidelines

Submission process

If and when your paper has been accepted for MODELS 2024, you will be invited by the AEC chairs to submit the artifacts related to your work. This invitation will contain detailed instructions on how to submit your artifacts. As explained in the About page, we follow the “Artifact Review and Badging Version 1.1” policy provided by the ACM.

For the reusable and available badges, authors must offer “download information” showing how reviewers can access and easily execute (if appropriate) their artifact. The authors need to make the packaged artifact (installation package or simple package) available so that the Evaluation Committee can access it. We suggest a link to a public repository or to a single archive file in a widely available archive format.

If the authors are aiming for the “available” badge or beyond, the artifact needs to be publicly accessible, and a DOI is required. Otherwise, artifacts do not necessarily have to be publicly accessible for the review process; in that case, the authors are asked to provide a private link or a password-protected link. In either case, we encourage authors to ensure that artifacts can be accessed with the link only (e.g., no registration is necessary). Note that GitHub/GitLab are not archival repositories as required by the ACM; however, there is an easy-to-use GitHub-to-Zenodo service.
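When using the GitHub-to-Zenodo integration, Zenodo can take the deposit metadata for an archived release from a `.zenodo.json` file in the repository root. The sketch below writes a minimal such file; every field value is an illustrative placeholder, not a template for a real submission:

```shell
# Write a minimal .zenodo.json so that the GitHub-to-Zenodo integration
# picks up deposit metadata on the next tagged release.
# All values below are placeholders.
cat > .zenodo.json <<'EOF'
{
  "title": "Artifact for: <paper title>",
  "upload_type": "software",
  "description": "Evaluation artifact accompanying a MODELS 2024 paper.",
  "creators": [{ "name": "Doe, Jane", "affiliation": "Example University" }],
  "license": "MIT",
  "keywords": ["MODELS 2024", "artifact"]
}
EOF
```

After linking the repository to Zenodo and tagging a release on GitHub, Zenodo archives the tag and mints a DOI; the metadata above overrides Zenodo’s defaults for that deposit.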

The authors need to write and submit documentation explaining how to obtain the artifact package, how to unpack it, how to get started, and how to use the artifacts in more detail. The artifact submission should describe only the technicalities of the artifacts and those uses of the artifact not already covered in the paper.

The submission should contain the following documents (either in plain text, using a markup language such as Markdown, AsciiDoc, or reStructuredText, or in PDF format) in a zip archive:

  • A README main file describing what the artifact does and where it can be obtained (with hidden links and an access password if necessary). It should also clearly describe how to repeat/replicate/reproduce the results presented in the paper. Artifacts that focus on data should, in principle, cover aspects relevant to understanding the context, data provenance, ethical and legal statements (where relevant), and storage requirements. Artifacts that focus on software should, in principle, cover how to install and use the software (and be accompanied by a small example).
  • A REQUIREMENTS file for artifacts that focus on software. This file should, in principle, cover hardware environment requirements (e.g., performance, storage, or non-commodity peripherals) and software environments (e.g., Docker, VM, and operating system). When relevant, it is strongly encouraged to also provide a configuration file for a dependency management system (e.g., package.json, pom.xml, requirements.txt, Cargo.toml) or a container build file (e.g., Dockerfile/Containerfile). Any deviation from standard environments needs to be reasonably justified. It is strongly recommended to make execution as smooth as possible for the artifact evaluation committee.
  • A STATUS file stating what kind of badge(s) the authors are applying for as well as the reasons why the authors believe that the artifact deserves that badge(s).
  • A LICENSE file describing the distribution rights. Note that to score “available” or higher, the license needs to be some form of open-source license. Details are also given under the respective badges and the open science policies as adopted by ACM SIGSOFT.
  • A copy of the accepted paper in PDF format.
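Put together, the submission package might be assembled along these lines. This is only a sketch of the expected layout: every file name follows the guidelines above, but all file contents are placeholders to be replaced with your own material.

```shell
# Illustrative layout of the artifact submission package.
# All file contents below are placeholders, not templates to copy verbatim.
mkdir -p submission
printf '%s\n' '# Artifact for: <paper title>' \
  'What the artifact does, where to obtain it, and how to' \
  'repeat/replicate/reproduce the results presented in the paper.' \
  > submission/README.md
printf '%s\n' 'Hardware: commodity laptop, >= 8 GB RAM' \
  'Software: Docker 24+ (see Dockerfile)' \
  > submission/REQUIREMENTS.md
printf '%s\n' 'Badges claimed: Artifact Evaluated - Functional; Artifacts Available' \
  'Justification: ...' \
  > submission/STATUS.md
printf '%s\n' 'SPDX-License-Identifier: MIT (placeholder)' > submission/LICENSE
ls submission
```

Add a copy of the accepted paper (e.g., as `paper.pdf`) and zip the directory, for instance with `zip -r artifact-submission.zip submission`.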

Evaluation Process

Each submitted artifact will be evaluated by at least two members of the AEC. Throughout the process, the artifacts will be treated as confidential, just like the submitted paper. The evaluation consists of two steps:

  • Kicking-the-tires: Reviewers will check the artifact’s integrity and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, VM won’t start, immediate crashes on the simplest example, etc.). In case of any problems, authors will be given 4 days to read and respond to the kick-the-tires reports of their artifacts and solve any issues preventing the artifact evaluation.
  • Artifact assessment: Reviewers evaluate the artifacts and decide on the approval of the artifact.

As the artifact evaluation notification will come after the camera-ready deadline, we will ensure that the published article carries the corresponding ACM Artifact Evaluation Badge. Moreover, we advise authors to already provide a stable link to their artifact in the camera-ready version, for instance a DOI link to a Zenodo repository.

Questions? Use the MODELS Artifact Evaluation contact form.