CC 2021
Tue 2 - Wed 3 March 2021 Online Conference

In order to promote reproducibility in our research, authors of accepted papers will be invited to submit supporting materials to the Artifact Evaluation process. The Artifact Evaluation process is run by a separate committee whose task is to reproduce (at least some) experiments and assess how the artifacts support the work described in the papers.

Artifact submission will not influence the final decision regarding the papers. Papers that go through the Artifact Evaluation process successfully will receive badges indicating the level of reproducibility of the work (more information is available on the ACM website: https://www.acm.org/publications/policies/artifact-review-badging).

We also encourage authors to make their supporting materials publicly available, for example by including them as “source materials” in the ACM Digital Library.


Program

Tue 2 Mar
Times are displayed in Eastern Time (US & Canada).

12:15 - 12:30
CC Opening (CC Research Papers) at CC Virtual Room
Chair(s): Aaron Smith (University of Edinburgh; Microsoft), Rajiv Gupta (UC Riverside), Delphine Demange (Univ Rennes, Inria, CNRS, IRISA)
12:30 - 13:15
IR Design (CC Research Papers) at CC Virtual Room
Chair(s): Albert Cohen (Google)
12:30
15m
Talk
Data-Aware Process Networks
CC Research Papers
Christophe Alias (CNRS; ENS Lyon; Inria; University of Lyon), Alexandru Plesco (XtremLogic)
12:45
15m
Talk
Integrating a Functional Pattern-Based IR into MLIR [Artifacts Evaluated – Functional v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
Martin Lücke (University of Edinburgh), Michel Steuwer (University of Edinburgh), Aaron Smith (University of Edinburgh; Microsoft)
13:00
15m
Talk
Compiling Data-Parallel Datalog
CC Research Papers
Thomas Gilray (University of Alabama at Birmingham), Sidharth Kumar (University of Alabama at Birmingham), Kristopher Micinski (Syracuse University)
13:15 - 13:30
13:30 - 14:15
Optimization (CC Research Papers) at CC Virtual Room
Chair(s): Christophe Dubach (McGill University)
13:30
15m
Talk
PGZ: Automatic Zero-Value Code Specialization
CC Research Papers
13:45
15m
Talk
Exploring the Space of Optimization Sequences for Code-Size Reduction: Insights and Tools [Artifacts Evaluated – Reusable v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
Anderson Faustino da Silva (State University of Maringá), Bernardo N. B. de Lima (Federal University of Minas Gerais), Fernando Magno Quintão Pereira (Federal University of Minas Gerais)
14:00
15m
Talk
PolyBench/Python: Benchmarking Python Environments with Polyhedral Optimizations [Artifacts Evaluated – Reusable v1.1, Artifacts Available v1.1]
CC Research Papers
Miguel Á. Abella-González (Universidade da Coruña), Pedro Carollo-Fernández (Universidade da Coruña), Louis-Noël Pouchet (Colorado State University), Fabrice Rastello (Inria), Gabriel Rodríguez (Universidade da Coruña)
14:30 - 15:30
CC Business Meeting (CC Research Papers) at CC Virtual Room
14:30
60m
Meeting
CC Business Meeting
CC Research Papers

Wed 3 Mar
Times are displayed in Eastern Time (US & Canada).

10:00 - 10:45
Safety & Correctness (CC Research Papers) at CC Virtual Room
Chair(s): Jan Vitek (Northeastern University / Czech Technical University)
10:00
15m
Talk
A Modern Compiler for the French Tax Code [Artifacts Evaluated – Reusable v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
Denis Merigoux (Inria), Raphaël Monat (Sorbonne University; CNRS; LIP6), Jonathan Protzenko (Microsoft Research)
10:15
15m
Talk
NSan: A Floating-Point Numerical Sanitizer
CC Research Papers
Clement Courbet (Google Research)
10:30
15m
Talk
Communication-Safe Web Programming in TypeScript with Routed Multiparty Session Types [Artifacts Evaluated – Reusable v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
Anson Miu (Imperial College London; Bloomberg), Francisco Ferreira (Imperial College London), Nobuko Yoshida (Imperial College London), Fangyi Zhou (Imperial College London)
Pre-print Media Attached
10:45 - 11:10
11:10 - 11:55
Code Generation & Binary Analysis (CC Research Papers) at CC Virtual Room
Chair(s): Bernhard Egger (Seoul National University)
11:10
15m
Talk
Helper Function Inlining in Dynamic Binary Translation
CC Research Papers
Wenwen Wang (University of Georgia)
11:25
15m
Talk
Lightning BOLT: Powerful, Fast, and Scalable Binary Optimization [Artifacts Evaluated – Functional v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
Maksim Panchenko (Facebook), Rafael Auler (Facebook), Laith Sakka (Purdue University), Guilherme Ottoni (Facebook)
11:40
15m
Talk
Compact Native Code Generation for Dynamic Languages on Micro-core Architectures
CC Research Papers
Maurice Jamieson (University of Edinburgh), Nick Brown (University of Edinburgh)
11:55 - 12:30
12:30 - 13:00
Natural & Source Language Analysis (CC Research Papers) at CC Virtual Room
Chair(s): Zhijia Zhao (UC Riverside)
12:30
15m
Talk
Deep NLP-Based Co-evolvement for Synthesizing Code Analysis from Natural Language
CC Research Papers
Zifan Nan (North Carolina State University), Hui Guan (University of Massachusetts at Amherst), Xipeng Shen (North Carolina State University), Chunhua Liao (Lawrence Livermore National Laboratory)
12:45
15m
Talk
Resolvable Ambiguity: Principled Resolution of Syntactically Ambiguous Programs [Artifacts Evaluated – Reusable v1.1, Results Reproduced v1.1, Artifacts Available v1.1]
CC Research Papers
13:00 - 13:15
CC Closing (CC Research Papers) at CC Virtual Room
Chair(s): Aaron Smith (University of Edinburgh; Microsoft), Delphine Demange (Univ Rennes, Inria, CNRS, IRISA), Rajiv Gupta (UC Riverside)
13:15 - 13:30

Call for Artifacts

Authors of accepted CC 2021 papers are invited to formally submit their supporting materials to the Artifact Evaluation (AE) process. The Artifact Evaluation Committee attempts to reproduce (at least the main) experiments and assesses whether the submitted artifacts support the claims made in the paper. Submission is voluntary and does not influence the final decision regarding paper acceptance.

We invite every author of an accepted CC paper to consider submitting an artifact. At CC we follow ACM’s artifact reviewing and badging policy. ACM describes a research artifact as follows:

By “artifact” we mean a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself. For example, artifacts can be software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results.

Submission

Authors must submit:

For the artifact itself, we encourage the use of container or VM technologies such as Docker, Singularity, VirtualBox, or Vagrant to package the artifact in one stand-alone container or VM that provides all required dependencies. Giving AE reviewers remote access to your machines with preinstalled (proprietary) software is also possible.
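
As a purely illustrative sketch (not a required layout; the script, benchmark, and file names below are invented), such a container might ship a single entry-point script that reruns the main experiments and collects the outputs reviewers need to inspect:

  #!/usr/bin/env python3
  # Hypothetical artifact entry point: rerun the main experiments and
  # collect their outputs in one place. The benchmark names and paths are
  # placeholders, not taken from any specific CC 2021 artifact.
  import json
  import pathlib
  import subprocess

  RESULTS_DIR = pathlib.Path("results")
  BENCHMARKS = ["bench_a", "bench_b"]  # pre-built binaries assumed in ./benchmarks/

  def run_benchmark(name: str) -> dict:
      # Run one benchmark binary and parse its (assumed) JSON output.
      proc = subprocess.run(
          [f"./benchmarks/{name}", "--json"],
          capture_output=True, text=True, check=True,
      )
      return json.loads(proc.stdout)

  def main() -> None:
      RESULTS_DIR.mkdir(exist_ok=True)
      summary = {name: run_benchmark(name) for name in BENCHMARKS}
      (RESULTS_DIR / "summary.json").write_text(json.dumps(summary, indent=2))
      print("Wrote", RESULTS_DIR / "summary.json")

  if __name__ == "__main__":
      main()

A reviewer then only has to run one command inside the container (for example, something like “docker run my-artifact python3 run_all.py”, with hypothetical names) to regenerate the summary data.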

If you have an unusual experimental setup that requires specific hardware (e.g., custom hardware, oscilloscopes for measurements, …) or proprietary software, please contact the artifact evaluation chairs before the submission.

More tips for preparing a submission are available on the ctuning website.

Evaluation Process

Each submitted artifact is evaluated by at least two members of the artifact evaluation committee.

During the process, authors and evaluators are allowed to communicate anonymously with each other to overcome technical difficulties.
Ideally, we hope to see all submitted artifacts successfully pass artifact evaluation.

The evaluators are asked to assess the artifact based on the following criteria, which are defined by ACM.

Is the artifact functional?

  • Package complete? Are all components relevant to the evaluation included in the package?
  • Well documented? Is the documentation sufficient to understand, install, and evaluate the artifact?
  • Exercisable? Does it include scripts and/or software to perform appropriate experiments and generate results?
  • Consistent? Are the artifacts relevant to the associated paper, and do they contribute in some inherent way to the generation of its main results?

The artifacts associated with the paper will receive an “Artifacts Evaluated – Functional” badge only if they are found to be documented, consistent, complete, and exercisable, and to include appropriate evidence of verification and validation.

Is the artifact customizable and reusable?

  • Can this artifact and experimental workflow be easily reused and customized?
    For example, can it be used on a different platform, with different benchmarks, data sets, compilers, tools, under different conditions and parameters, etc.?

The artifacts associated with the paper will receive an “Artifacts Evaluated – Reusable” badge only if they are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level but, in addition, they are very carefully documented and well structured to the extent that reuse and repurposing are facilitated. In particular, the norms and standards of the research community for artifacts of this type are strictly adhered to.
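
To give a hedged illustration of what “customizable” can mean in practice (the flag names and defaults below are invented, not taken from any CC 2021 artifact), a driver script that accepts the compiler, optimization level, and benchmark list as command-line parameters is far easier to reuse on a different platform or with different benchmarks than one with hard-coded paths:

  #!/usr/bin/env python3
  # Illustrative sketch of a customizable experiment driver; all flags and
  # defaults are hypothetical.
  import argparse
  import subprocess

  def parse_args() -> argparse.Namespace:
      p = argparse.ArgumentParser(description="Build benchmarks with a chosen compiler")
      p.add_argument("--compiler", default="gcc", help="compiler executable to use")
      p.add_argument("--opt-level", default="-O2", help="optimization flag to pass")
      p.add_argument("--benchmarks", nargs="+", default=["matmul.c"],
                     help="benchmark source files to compile")
      return p.parse_args()

  def main() -> None:
      args = parse_args()
      for src in args.benchmarks:
          # Compile each benchmark with the user-selected compiler and flags,
          # so the same workflow can be rerun with a different toolchain.
          subprocess.run([args.compiler, args.opt_level, "-o", src + ".out", src],
                         check=True)
          print("built", src, "with", args.compiler, args.opt_level)

  if __name__ == "__main__":
      main()

Swapping in a different compiler or benchmark suite then only requires different command-line arguments, not edits to the script itself.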

Have the results been validated?

  • Can all main results from the paper be validated using the provided artifacts?
    Evaluators are asked to report any unexpected artifact behavior (depending on the type of artifact: unexpected output, scalability issues, crashes, performance variation, etc.).

The artifacts associated with the paper will receive a “Results Replicated” badge only if the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the authors. Note that variation of empirical and numerical results is tolerated; in fact, it is often unavoidable in computer systems research (see “How to report and compare empirical results?” in the AE FAQ on ctuning.org).

Based on the results, the following badges are awarded.

Badges

ACM recommends awarding three different types of badges to communicate how the artifact has been evaluated. A single paper can receive up to three badges — one badge of each type.

The green Artifacts Available badge indicates that an artifact is publicly accessible in an archival repository. For this badge to be awarded, the artifact does not have to be independently evaluated. ACM requires that a qualified archival repository is used, for example Zenodo, figshare, or Dryad. Personal webpages, GitHub repositories, and the like are not sufficient, because their contents can change after the submission deadline.
The red Artifacts Evaluated badges indicate that a research artifact has successfully completed an independent audit: a reviewer has verified that the artifact is documented, complete, consistent, exercisable, and includes appropriate evidence of verification and validation. Two levels are distinguished:

The lighter red Artifacts Evaluated – Functional badge indicates a basic level of functionality. The darker red Artifacts Evaluated – Reusable badge indicates a higher-quality artifact that significantly exceeds minimal functionality so that reuse and repurposing are facilitated.

Artifacts need not be made publicly available to be considered for one of these badges. However, they do need to be made available to reviewers.
The blue Results Validated badges indicate that the main results of the paper have been successfully obtained by an independent reviewer. Two levels are distinguished:

The darker blue Results Reproduced badge indicates that the main results of the paper have been successfully obtained using the provided artifact. The lighter blue Results Replicated badge indicates that the main results of the paper have been independently obtained without using the author-provided research artifact.

Artifacts need not be made publicly available to be considered for one of these badges. However, they do need to be made available to reviewers.

At CC, the artifact evaluation committee awards, for each successfully evaluated paper, one of the two red Artifacts Evaluated badges as well as the darker blue Results Reproduced badge. We do not award the lighter blue Results Replicated badge in this artifact evaluation process. The green Artifacts Available badge does not require the formal audit and is therefore awarded directly by the publisher, provided the authors supply a link to the deposited artifact.