Artifact Evaluation Track and ROSE Festival (ICSME 2023)
Wed 4 Oct. Displayed time zone: Bogota, Lima, Quito, Rio Branco.
15:30 - 16:45 | ROSE: Artifact Evaluation Track and ROSE Festival, Session 1 Room - RGD 004. Chair(s): Venera Arnaoudova (Washington State University), Sonia Haiduc (Florida State University)
15:30 | 5m Talk | ROSE Festival Introduction
15:35 | 5m Talk | PyAnaDroid: A fully-customizable execution pipeline for benchmarking Android Applications
15:40 | 5m Talk | Artifact for What’s in a Name? Linear Temporal Logic Literally Represents Time Lines. Runming Li (Carnegie Mellon University), Keerthana Gurushankar (Carnegie Mellon University), Marijn Heule (Carnegie Mellon University), Kristin Yvonne Rozier (Iowa State University)
15:45 | 5m Talk | PASD: A Performance Analysis Approach Through the Statistical Debugging of Kernel Events
15:50 | 5m Talk | Interactively exploring API changes and versioning consistency. Souhaila Serbout (Software Institute @ USI), Diana Carolina Munoz Hurtado (University of Lugano, Switzerland), Cesare Pautasso (Software Institute, Faculty of Informatics, USI Lugano)
15:55 | 5m Talk | Generating Understandable Unit Tests through End-to-End Test Scenario Carving
16:00 | 5m Talk | Understanding the NPM Dependencies Ecosystem of a Project Using Virtual Reality - Artifact. David Moreno-Lumbreras (Universidad Rey Juan Carlos), Jesus M. Gonzalez-Barahona (Universidad Rey Juan Carlos), Michele Lanza (Software Institute - USI, Lugano)
16:05 | 5m Talk | DGT-AR: Visualizing Code Dependencies in AR. Dussan Freire-Pozo, Kevin Cespedes-Arancibia, Leonel Merino (University of Stuttgart), Alison Fernandez Blanco (Pontificia Universidad Católica de Chile), Andres Neyem, Juan Pablo Sandoval Alcocer (Pontificia Universidad Católica de Chile)
16:10 | 5m Talk | Calibrating Deep Learning-based Code Smell Detection using Human Feedback. Himesh Nandani (Dalhousie University), Mootez Saad (Dalhousie University), Tushar Sharma (Dalhousie University)
16:15 | 5m Talk | A Component-Sensitive Static Analysis Based Approach for Modeling Intents in Android Apps. Negarsadat Abolhassani (University of Southern California), William G.J. Halfond (University of Southern California)
16:20 | 5m Talk | Uncovering the Hidden Risks: The Importance of Predicting Bugginess in Untouched Methods. Matteo Esposito (University of Rome Tor Vergata), Davide Falessi (University of Rome Tor Vergata, Italy)
16:25 | 5m Talk | GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench. Ajmain Inqiad Alam (University of Saskatchewan), Palash Ranjan Roy (University of Saskatchewan), Farouq Al-omari (University of Saskatchewan), Chanchal K. Roy (University of Saskatchewan), Banani Roy (University of Saskatchewan), Kevin Schneider (University of Saskatchewan)
16:30 | 5m Talk | RefSearch: A Search Engine for Refactoring
16:35 | 5m Talk | Can We Trust the Default Vulnerabilities Severity? Matteo Esposito (University of Rome Tor Vergata), Sergio Moreschini (Tampere University), Valentina Lenarduzzi (University of Oulu), David Hastbacka, Davide Falessi (University of Rome Tor Vergata, Italy)
16:40 | 5m Talk | ROSE Awards
Unscheduled Events
Not scheduled | Talk | Artisan: An Action-Based Test Carving Tool for Android Apps. Alessio Gambi (IMC University of Applied Sciences Krems), Mengzhen Li (University of Minnesota), Mattia Fazzini (University of Minnesota)
Call for Papers
Goal and Scope
The ICSME 2023 Joint Artifact Evaluation Track and ROSE (Recognizing and Rewarding Open Science in SE) Festival is a special track that aims to promote, reward, and celebrate open science in Software Engineering research. Authors of papers accepted to any ICSME, SCAM, or VISSOFT technical track can submit their artifacts for evaluation. Papers will be given the IEEE Open Research Object or Research Object Reviewed badge if their corresponding artifacts meet certain conditions (see below).
If you already know what these badges mean, you can skip to the call for contributions. If you want to learn about the badges, keep reading!
What Artifacts are Accepted?
Artifacts of interest include (but are not limited to) the following:
- Software, which are implementations of systems or algorithms potentially useful in other studies.
- Automated experiments that replicate the study in the accepted paper.
- Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.
- Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.
- Qualitative artifacts such as interview scripts and survey templates.
This list is not exhaustive. If your proposed artifact is not on this list, please email the chairs before submitting. For additional types of artifacts, please see here.
What Are the Criteria for “Open Research Object” or “Research Object Reviewed” Badges?
Open Research Object
A paper will be awarded the IEEE “Open Research Object” badge if its artifact is placed in a publicly accessible archival repository, and a DOI or link to this persistent repository is provided.
Research Object Reviewed
A paper will be awarded the IEEE “Research Object Reviewed” badge if its artifact is documented, consistent, complete, and exercisable, and includes appropriate evidence of verification and validation. Moreover, the documentation and structure of the artifact should be good enough to facilitate reuse and repurposing. These terms are defined as follows:
- Documented: At a minimum, an inventory of artifacts is included, and sufficient description is provided to enable the artifacts to be exercised.
- Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
- Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
- Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.
A paper can be given both badges if the artifact is open, exercisable, well-structured, and well-documented so as to allow reuse and repurposing. IEEE defines two further categories, “Results Reproduced” and “Results Replicated”; however, they apply only if a subsequent study, conducted by a person or team other than the authors, has confirmed that the main findings hold. As the artifact evaluation process is not as comprehensive as such a subsequent study, we follow ICSME 2022 and only award the “Open Research Object” and “Research Object Reviewed” badges.
If you want to learn more about open science, the badging system, and the importance of creating open research objects, you can read here and here.
Call for Artifact Contributions
Authors of papers accepted to any ICSME, SCAM, or VISSOFT 2023 track are invited to submit artifacts that enable the reproducibility and replicability of their results to the artifact evaluation track. Depending on the assessment, we will award badges, displayed in those papers, that recognize their contributions to open science.
Authors of all awarded artifacts will be invited to present them at the ROSE Festival (Recognizing and Rewarding Open Science in SE), a special session within ICSME where researchers can receive public credit for facilitating and participating in open science.
The ICSME artifact evaluation track uses a single-anonymous review process.
Best Artifact Award
There will be a Best Artifact Award for each venue (ICSME, VISSOFT, SCAM) to recognize the effort of authors who create and share outstanding research artifacts. The winners of the awards will be decided during the ROSE Festival.
Submission and Review
Note that all submissions, reviewing, and notifications for this track will be via the ICSME 2023 EasyChair conference management system (“Artifact Evaluation” Track). Authors must submit the following:
- Title and authors of the accepted paper.
- A short description of the artifact to be evaluated, given as the abstract (one paragraph).
- A 1-page PDF containing: (i) a link to the artifact to be evaluated (see the steps below to prepare this link), and (ii) the requirements to run the artifact (RAM, disk, packages, specific devices, operating system, etc.).
Authors of the papers accepted to the tracks must perform the following steps to submit an artifact:
Step 1: Preparing the Artifact
There are two options depending on the nature of the artifacts: Installation Package or Simple Package. In both cases, the configuration and installation of the artifact should take less than 30 minutes. Otherwise, the artifact is unlikely to be endorsed simply because the committee will not have sufficient time to evaluate it.
- Installation Package: If the artifact consists of a tool or software system, the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instructions, code, and data such that a person with a CS background and reasonable knowledge of scripting, build tools, etc., could install, build, and run the code (a hedged sketch of a single-command entry point is given after this list). If the artifact contains or requires a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools. Similarly, if the artifact requires specific hardware, this should be clearly documented in the requirements (see Step 3 – Documenting the Artifact). Note that we expect the artifacts to have been vetted on a clean machine before submission.
- Simple Package: If the artifact contains only documents that can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration), the authors can simply save all documents in a single package file (zip or tar.gz).
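For tool artifacts, one pattern reviewers tend to appreciate is a single entry-point script that regenerates the paper’s results and prints where the outputs land. The sketch below is only an illustration of that idea, not a requirement of the track; the script name (`run_all.py`), the command-line flags, and the output file name are assumptions that would be replaced by the artifact’s real pipeline.

```python
#!/usr/bin/env python3
"""Hypothetical single-command entry point for an installation-package artifact.

`python run_all.py --out results/` should regenerate the paper's tables and
figures and tell the reviewer exactly where they were written. The function
bodies below are placeholders standing in for the artifact's real analysis.
"""
import argparse
import csv
from pathlib import Path


def run_experiments(data_dir: Path):
    # Placeholder: the real artifact would run its tool or analysis here.
    return [{"rq": "RQ1", "metric": "example_metric", "value": 0.0}]


def export_results(rows, out_dir: Path) -> Path:
    out_file = out_dir / "table2.csv"  # hypothetical output name
    with out_file.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["rq", "metric", "value"])
        writer.writeheader()
        writer.writerows(rows)
    return out_file


def main() -> None:
    parser = argparse.ArgumentParser(description="Reproduce all results of the paper.")
    parser.add_argument("--data", default="data/", help="input data shipped with the artifact")
    parser.add_argument("--out", default="results/", help="directory for generated tables and figures")
    args = parser.parse_args()

    out_dir = Path(args.out)
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = export_results(run_experiments(Path(args.data)), out_dir)
    print(f"Done. Results written to {out_file.resolve()}")


if __name__ == "__main__":
    main()
```

With such an entry point, the README.md step-by-step instructions can simply point reviewers to one command per result, which also makes the 30-minute evaluation budget easier to respect.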
Step 2: Making the Artifact Available for Review
Authors need to make the packaged artifact (installation package or simple package) available so that the Evaluation Committee can access it. If the authors are aiming for the Open Research Object badge, then (i) the artifact needs to be publicly accessible, and (ii) a link to the artifact needs to be included in the camera-ready (CR) version of the paper. The process for awarding badges is conducted after the CR deadline.
Note that links to individual websites or to temporary drives (e.g., Google Drive) are non-persistent, and thus artifacts placed in such locations will not be considered for the Open Research Object badge. Examples of persistent repositories that issue DOIs are IEEE DataPort, Zenodo, figshare, and Open Science Framework. For installation packages, authors can use CodeOcean, a cloud-based computational reproducibility platform that is fully integrated with IEEE Xplore. Other suitable providers can be found here. Institutional repositories are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility.
One relatively simple way to make your packaged artifact publicly accessible:
- Create a GitHub repo.
- Register the repo at Zenodo.org. For details on that process, see Citable Code Guidelines.
- Make a release on GitHub, at which time Zenodo will automatically grab a copy of that repo and issue a Digital Object Identifier (DOI), e.g., https://doi.org/10.5281/zenodo.4308746.
Artifacts do not necessarily have to be publicly accessible for the review process (if the goal is only the “Research Object Reviewed” badge). In this case, the authors are asked to provide a private link or a password-protected link.
Step 3: Documenting the Artifact
Authors need to provide documentation explaining how to obtain the artifact package, how to unpack it, how to get started, and how to use it, in sufficient detail. The documentation must describe only the technicalities and uses of the artifact that are not already covered in the paper. The artifact should contain the following documents (in Markdown plain-text format within the root folder):
- A README.md main file describing what the artifact does and how and where it can be obtained (with hidden links and access password if necessary). There should be a clear description, step-by-step, of how to reproduce the results presented in the paper. Reviewers should not need to figure out on their own what the input is for a specific step or what output is produced (and where). All usage instructions should be explicitly documented in the step-by-step instructions of the README.md file. Provide an explicit mapping between the results and claims reported in the paper and the steps listed in README.md for easy traceability.
- A LICENSE.md file describing the distribution rights. Note that to earn the “Open Research Object” badge, the license needs to be an OSI-compliant open-source license.
- A REQUIREMENTS.md file describing all necessary software/hardware prerequisites.
- An INSTALL.md file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation (one possible shape of such a check is sketched after this list). This could be, for instance, information on what output to expect that confirms that the code is installed and working, and that the code is doing something interesting and useful. At the end of INSTALL.md, state the configuration on which the installation was tested.
- An ADDITIONAL_INFORMATION.md file with any additional information that you think might be useful but does not fit the documents above.
- A copy of the accepted paper in PDF format.
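As a concrete illustration of the installation check mentioned for INSTALL.md above, a small self-contained script like the following can save reviewers a lot of guesswork. This is a hedged sketch only: the file name (`smoke_test.py`), the package list, and the trivial end-to-end check are assumptions to be replaced by whatever the artifact actually requires.

```python
#!/usr/bin/env python3
"""Hypothetical smoke test referenced from INSTALL.md.

It verifies that the prerequisites listed in REQUIREMENTS.md are importable
and runs one tiny end-to-end step, so a reviewer can confirm within seconds
that the artifact is installed and working.
"""
import importlib.util
import sys

# Assumption: replace with the packages the artifact actually depends on.
REQUIRED_PACKAGES = ["numpy", "pandas"]


def main() -> int:
    missing = [pkg for pkg in REQUIRED_PACKAGES if importlib.util.find_spec(pkg) is None]
    if missing:
        print(f"FAIL: missing packages: {', '.join(missing)} (see REQUIREMENTS.md)")
        return 1

    # Trivial stand-in for the artifact's basic usage example.
    checksum = sum(range(10))
    print(f"OK: prerequisites found; basic example returned the expected value ({checksum}).")
    print("If you see this line, the installation described in INSTALL.md works.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Documenting the expected output of such a script at the end of INSTALL.md gives reviewers an unambiguous pass/fail signal before they move on to reproducing the paper’s results.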
Submission Link
Please use the following link: https://easychair.org/my/conference?conf=icsme2023