Traditionally, technical research papers are published without any accompanying artifacts (such as tools, data, models, videos, etc.), even though the artifacts may serve as crucial and detailed evidence for the quality of the results that the associated paper offers. Artifacts support the repeatability of experiments and precise comparison with alternative approaches, thus enabling higher quality in the research area as a whole. They may also make it easier for other researchers to perform their own experiments, thus helping the original authors disseminate their ideas in detail. Hence, artifacts should be taken seriously and recognized separately.
The AE process at ECOOP 2017 is a continuation of the AE process at ECOOP 2016, ECOOP 2015, ECOOP 2014, and ECOOP 2013, as well as at several other conferences, including ESEC/FSE, OOPSLA, PLDI, ISSTA, HSCC, and SAS: see the authoritative Artifact Evaluation for Software Conferences web site.
Authors will be invited to archive their accepted artifacts on the new Dagstuhl Artifacts Series (DARTS) published in the Dagstuhl Research Online Publication Server (DROPS). Each artifact will be assigned a DOI, separate from the ECOOP companion paper, allowing the community to cite artifacts on their own.
- IceDust 2: Derived Bidirectional Relations and Calculation Strategy Composition (D. Harkes; E. Visser)
- A Capability-Based Module System for Authority Control (D. Melicher; Y. Shi; A. Potanin; J. Aldrich)
- A Linear Decomposition of Multiparty Sessions for Safe Distributed Programming (A. Scalas; O. Dardha; R. Hu; N. Yoshida)
- Concurrent Data Structures Linked in Time (G. Delbianco; I. Sergey; A. Nanevski; A. Banerjee)
- Strong Normalization for Dependent Object Types (DOT) (F. Wang; T. Rompf)
- Contracts in the Wild: A Study of Java Programs (J. Dietrich; D. Pearce; K. Jezek; P. Brada)
- Parallelizing Julia with a Non-invasive DSL (T. Anderson; H. Liu; L. Kuper; E. Totoni; J. Vitek; T. Shpeisman)
- Mixed Messages: Measuring Conformance and Non-Interference in TypeScript (J. Williams; J. Morris; P. Wadler; J. Zalewski)
- Type Abstraction for Relaxed Noninterference (R. Cruz; T. Rezk; B. Serpette; É. Tanter)
- EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse (W. Zhang; B. Oliveira)
- Mailbox Abstractions for Static Analysis of Actor Programs (Q. Stiévenart; J. Nicolay; W. De Meuter; C. De Roover)
- Data exploration through dot-driven development (T. Petricek)
- EvilPickles: DoS attacks based on Object-Graph Engineering (J. Dietrich; K. Jezek; S. Rasheed; A. Tahir; A. Potanin)
- Interprocedural Specialization of Higher-Order Dynamic Languages Without Static Analysis (B. Saleil; M. Feeley)
- Strong Logic for Weak Memory: Reasoning About Release-Acquire Consistency in Iris (J. Kaiser; H. Dang; D. Dreyer; O. Lahav; V. Vafeiadis)
- Proactive Synthesis of Recursive Tree-to-String Functions from Examples (M. Mayer; J. Hamza; V. Kuncak)
Call for Artifacts
Authors of accepted research papers at ECOOP 2017 can have their artifacts evaluated by an Artifact Evaluation Committee. Artifacts that live up to the expectations created by the paper will be marked with a badge in the proceedings. Furthermore, they will be invited for inclusion in the Dagstuhl Artifacts Series (DARTS) published in the Dagstuhl Research Online Publication Server (DROPS). Artifacts in DARTS are freely downloadable, with permanent and durable storage guaranteed. As software projects are likely to evolve over time, archived artifacts provide a snapshot of the actual software/data that was used to create the paper: we expect this will simplify the job of independently repeating any experiments presented in the paper. Although there is no obligation for accepted artifacts to be included in DARTS, readers of accepted papers will greatly benefit from having access to those artifacts, and the attention that the authors' work receives is likely to increase if the artifacts are made publicly available. Artifacts that are deemed especially meritorious will be singled out for special recognition in the proceedings and at the conference.
The Artifact Evaluation process is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. The submission of an artifact is voluntary and will not influence the final decision regarding the papers (which is guaranteed, since artifacts are submitted only after the notification of acceptance has been sent out). Notification of the outcome of the Artifact Evaluation, along with reviews including suggestions for improving the artifacts, will be distributed about two weeks before the deadline for the final version of the research paper, so that the outcome can be mentioned in the paper and the final artifact can be uploaded for inclusion in DARTS.
A submitted artifact should be consistent with the associated paper. It should be documented well enough to be accessible to a general computer scientist with an interest in the research area who has read the paper.
A submitted artifact is treated as confidential, just like a submitted research paper. However, it is strongly recommended that artifacts be made available to the research community afterwards, thus enabling the above-mentioned benefits, such as improved reproducibility.
How to Submit
Submission site: https://ecoop17ae.hotcrp.com/
Every submission must include:
- A PDF file that describes the artifact and how to use it.
- A URL for downloading the artifact.
- The PDF of the most recent version of the accepted paper.
When packaging your artifact for submission, please take the following into consideration. Your artifact should be as accessible to the AE committee members as possible, and it should allow them to quickly make progress in their investigation. Please provide some simple scenarios describing concretely how the artifact is intended to be used; for a tool, this would include specific inputs to provide or actions to take, together with the expected output or behavior in response. In addition to these tightly controlled scenarios that you prepare for the AE committee members to try out, it is very useful to suggest some variations along the way, so that the committee members can see that the artifact is robust enough to tolerate a few experiments.
For artifacts that are tools, one very convenient way for reviewers to learn about your artifact is to include a video showing you using the artifact in a simple scenario, along with verbal comments explaining what is going on.
To avoid problems with software dependencies and installation, it is very helpful to provide the artifact installed and ready to use on a virtual machine (for VirtualBox, VMware, or a similar widely available platform). The artifact must be made available as a single, self-contained archive file, using a widely supported archive format such as zip or a compressed tar format (e.g., .tgz). Please use widely supported open formats for documents, and preferably CSV or JSON for data.
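As a concrete illustration, the packaging and self-check steps above can be sketched with standard shell tools (the directory layout and file names here are hypothetical, not prescribed by the call):

```shell
# Hypothetical artifact layout: a directory holding the tool's files,
# a README with the usage scenario, and data in an open format (CSV).
mkdir -p artifact/data
printf 'name,value\nexample,42\n' > artifact/data/results.csv
printf 'See the usage scenario in this README.\n' > artifact/README.txt

# Package everything into a single, self-contained archive.
tar -czf artifact.tgz artifact/

# Sanity-check before submitting: list the archive's contents, then
# verify it extracts cleanly into a scratch directory.
tar -tzf artifact.tgz
mkdir -p check
tar -xzf artifact.tgz -C check
diff -r artifact check/artifact && echo "archive OK"
```

Running the extraction check on a clean machine (or a fresh VM) is the closest approximation to what a reviewer will see during the kick-the-tires phase.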
Submitted artifacts will go through a two-phase evaluation:
- Kick-the-tires: reviewers check the artifact's integrity and look for any setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that won't start, immediate crashes on the simplest example). Authors are informed of the outcome and, in case of technical problems, can help resolve them during a brief author response period.
- Artifact assessment: reviewers evaluate the artifacts, checking if they live up to the expectations created by the papers.
Kick-the-tires response period
Authors will be given a 72-hour period to read and respond to the kick-the-tires reports on their artifacts. Authors may be asked for clarifications if reviewers encountered problems that would prevent them from properly evaluating the artifact.
For additional information, clarification, or answers to questions, please contact the ECOOP Artifact Evaluation Co-Chairs, listed below.
Important Dates
|Wed 19 Apr 2017|Submission of artifacts|
|Mon 24 - Wed 26 Apr 2017|Kick-the-tires author response|
|Sat 13 May 2017|Artifact notification|
Philipp Haller (Artifact Evaluation Co-Chair), KTH Royal Institute of Technology
Michael Pradel (Artifact Evaluation Co-Chair)
Tijs van der Storm (Artifact Evaluation Co-Chair), CWI & University of Groningen
Artifact Evaluation Committee:
Luca Della Toffola
Lisa Nguyen Quang Do