Artifact Evaluation Track and ROSE Festival, ICSME 2023
Wed 4 Oct (displayed time zone: Bogota, Lima, Quito, Rio Branco)
10:30 - 12:00 | Machine Learning Applications (Research Track / Industry Track / New Ideas and Emerging Results Track), Session 1 Room - RGD 004. Chair(s): Masud Rahman (Dalhousie University)
10:30 (16m) Talk | GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench (Research Track). Ajmain Inqiad Alam (University of Saskatchewan), Palash Ranjan Roy (University of Saskatchewan), Farouq Al-omari (University of Saskatchewan), Chanchal K. Roy (University of Saskatchewan), Banani Roy (University of Saskatchewan), Kevin Schneider (University of Saskatchewan). Pre-print
10:46 (16m) Talk | DeltaNN: Assessing the Impact of Computational Environment Parameters on the Performance of Image Recognition Models (Industry Track). Nikolaos Louloudakis (University of Edinburgh), Perry Gibson (University of Glasgow), José Cano (University of Glasgow), Ajitha Rajan (University of Edinburgh)
11:02 (16m) Talk | You Augment Me: Exploring ChatGPT-based Data Augmentation for Semantic Code Search (Research Track). Yanlin Wang (Sun Yat-sen University), Lianghong Guo (Beijing University of Posts and Telecommunications), Ensheng Shi (Xi’an Jiaotong University), Wenqing Chen (Sun Yat-sen University), Jiachi Chen (Sun Yat-sen University), Wanjun Zhong (Sun Yat-sen University), Menghan Wang (eBay Inc.), Hui Li (Xiamen University), Ziyu Lyu (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), Hongyu Zhang (Chongqing University), Zibin Zheng (Sun Yat-sen University)
11:18 (11m) Talk | Benchmarking Causal Study to Interpret Large Language Models for Source Code (New Ideas and Emerging Results Track). Daniel Rodriguez-Cardenas, David Nader Palacio (William and Mary), Dipin Khati (William & Mary), Henry Burke (William & Mary), Denys Poshyvanyk (William & Mary)
11:29 (16m) Talk | Deploying Deep Reinforcement Learning Systems: A Taxonomy of Challenges (Research Track). Ahmed Haj Yahmed (École Polytechnique de Montréal), Altaf Allah Abbassi (Polytechnique Montreal), Amin Nikanjam (École Polytechnique de Montréal), Heng Li (Polytechnique Montréal), Foutse Khomh (Polytechnique Montréal)
11:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
10:30 - 12:00 | Software Quality (Journal First Track / Tool Demo Track / New Ideas and Emerging Results Track / Research Track), Session 2 Room - RGD 04. Chair(s): Valentina Lenarduzzi (University of Oulu), César França (Universidade Federal Rural de Pernambuco)
10:30 (16m) Talk | Featherweight Assisted Vulnerability Discovery (Journal First Track). David Binkley (Loyola University Maryland), Leon Moonen (Simula Research Laboratory and BI Norwegian Business School), Sibren Isaacman (Loyola University Maryland)
10:46 (11m) Talk | DebtViz: A Tool for Identifying, Measuring, Visualizing, and Monitoring Self-Admitted Technical Debt (Tool Demo Track). Yikun Li (University of Groningen), Mohamed Soliman, Paris Avgeriou (University of Groningen, The Netherlands), Maarten van Ittersum
10:57 (11m) Talk | Mining and Fusing Productivity Metrics with Code Quality Information at Scale (Tool Demo Track). Harsh Mukeshkumar Shah (Dalhousie University), Qurram Zaheer Syed, Bharatwaaj Shankaranarayanan, Indranil Palit (Dalhousie University), Arshdeep Singh, Kavya Raval, Kishan Savaliya, Tushar Sharma (Dalhousie University). Pre-print
11:08 (16m) Talk | An Investigation of Confusing Code Patterns in JavaScript (Journal First Track). Adriano Torres (Computer Science Department, University of Brasília), Caio Oliveira (Computer Science Department, University of Brasília), Marcio Okimoto (Computer Science Department, University of Brasília), Diego Marcilio (USI Università della Svizzera italiana), Pedro Queiroga (Informatics Center, Federal University of Pernambuco), Fernando Castor (Utrecht University & Federal University of Pernambuco), Rodrigo Bonifácio (Computer Science Department, University of Brasília), Edna Dias Canedo (University of Brasilia (UnB)), Márcio Ribeiro (Federal University of Alagoas, Brazil), Eduardo Monteiro (Statistics Department, University of Brasília)
11:24 (11m) Talk | StaticTracker: A Diff Tool for Static Code Warnings (Tool Demo Track)
11:35 (11m) Talk | Capturing Contextual Relationships of Buggy Classes for Detecting Quality-Related Bugs (New Ideas and Emerging Results Track)
11:46 (14m) Live Q&A | 1:1 Q&A (Research Track)
13:30 - 15:00 | Mining Software Repositories (Research Track / New Ideas and Emerging Results Track / Industry Track), Session 1 Room - RGD 004. Chair(s): Denys Poshyvanyk (William & Mary), Esteban Parra (Belmont University)
13:30 (16m) Talk | The Future Can’t Help Fix The Past: Assessing Program Repair In The Wild (Research Track). Vinay Kabadi (The University of Melbourne), Dezhen Kong (Zhejiang University), Siyu Xie (Zhejiang University), Lingfeng Bao, Gede Artha Azriadi Prana (Singapore Management University), Tien-Duy B. Le (Singapore Management University), Xuan-Bach D. Le (University of Melbourne), David Lo (Singapore Management University)
13:46 (16m) Talk | Process Mining from Jira Issues at a Large Company (Industry Track). Bavo Coremans (Thermo Fisher Scientific), Arjen Klomp (Thermo Fisher Scientific), Satrio Adi Rukmono, Jacob Krüger (Eindhoven University of Technology), Dirk Fahland (Eindhoven University of Technology), Michel Chaudron (Eindhoven University of Technology, The Netherlands)
14:02 (16m) Talk | Software Bill of Materials Adoption: A Mining Study from GitHub (Research Track). Sabato Nocera (University of Salerno), Simone Romano (University of Salerno), Massimiliano Di Penta (University of Sannio, Italy), Rita Francese (University of Salerno), Giuseppe Scanniello (University of Salerno)
14:18 (11m) Talk | An Empirical Study on the Use of Snapshot Testing (New Ideas and Emerging Results Track). Shun Fujita (Kyoto University), Yutaro Kashiwa (Nara Institute of Science and Technology), Bin Lin (Radboud University), Hajimu Iida (Nara Institute of Science and Technology)
14:29 (16m) Talk | A Framework for Automating the Measurement of DevOps Research and Assessment (DORA) Metrics (Research Track). Brennan Wilkes (University of Victoria), Alessandra Maciel Paz Milani (University of Victoria), Margaret-Anne Storey (University of Victoria)
14:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 16:45 | ROSE (Artifact Evaluation Track and ROSE Festival), Session 1 Room - RGD 004. Chair(s): Venera Arnaoudova (Washington State University), Sonia Haiduc (Florida State University)
15:30 (5m) Talk | ROSE Festival Introduction (Artifact Evaluation Track and ROSE Festival)
15:35 (5m) Talk | PyAnaDroid: A fully-customizable execution pipeline for benchmarking Android Applications (Artifact Evaluation Track and ROSE Festival)
15:40 (5m) Talk | Artifact for What’s in a Name? Linear Temporal Logic Literally Represents Time Lines (Artifact Evaluation Track and ROSE Festival). Runming Li (Carnegie Mellon University), Keerthana Gurushankar (Carnegie Mellon University), Marijn Heule (Carnegie Mellon University), Kristin Yvonne Rozier (Iowa State University)
15:45 (5m) Talk | PASD: A Performance Analysis Approach Through the Statistical Debugging of Kernel Events (Artifact Evaluation Track and ROSE Festival)
15:50 (5m) Talk | Interactively exploring API changes and versioning consistency (Artifact Evaluation Track and ROSE Festival). Souhaila Serbout (Software Institute @ USI), Diana Carolina Munoz Hurtado (University of Lugano, Switzerland), Cesare Pautasso (Software Institute, Faculty of Informatics, USI Lugano)
15:55 (5m) Talk | Generating Understandable Unit Tests through End-to-End Test Scenario Carving (Artifact Evaluation Track and ROSE Festival)
16:00 (5m) Talk | Understanding the NPM Dependencies Ecosystem of a Project Using Virtual Reality - Artifact (Artifact Evaluation Track and ROSE Festival). David Moreno-Lumbreras (Universidad Rey Juan Carlos), Jesus M. Gonzalez-Barahona (Universidad Rey Juan Carlos), Michele Lanza (Software Institute - USI, Lugano)
16:05 (5m) Talk | DGT-AR: Visualizing Code Dependencies in AR (Artifact Evaluation Track and ROSE Festival). Dussan Freire-Pozo, Kevin Cespedes-Arancibia, Leonel Merino (University of Stuttgart), Alison Fernandez Blanco (Pontificia Universidad Católica de Chile), Andres Neyem, Juan Pablo Sandoval Alcocer (Pontificia Universidad Católica de Chile)
16:10 (5m) Talk | Calibrating Deep Learning-based Code Smell Detection using Human Feedback (Artifact Evaluation Track and ROSE Festival). Himesh Nandani (Dalhousie University), Mootez Saad (Dalhousie University), Tushar Sharma (Dalhousie University)
16:15 (5m) Talk | A Component-Sensitive Static Analysis Based Approach for Modeling Intents in Android Apps (Artifact Evaluation Track and ROSE Festival). Negarsadat Abolhassani (University of Southern California), William G.J. Halfond (University of Southern California)
16:20 (5m) Talk | Uncovering the Hidden Risks: The Importance of Predicting Bugginess in Untouched Methods (Artifact Evaluation Track and ROSE Festival). Matteo Esposito (University of Rome Tor Vergata), Davide Falessi (University of Rome Tor Vergata, Italy)
16:25 (5m) Talk | GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench (Artifact Evaluation Track and ROSE Festival). Ajmain Inqiad Alam (University of Saskatchewan), Palash Ranjan Roy (University of Saskatchewan), Farouq Al-omari (University of Saskatchewan), Chanchal K. Roy (University of Saskatchewan), Banani Roy (University of Saskatchewan), Kevin Schneider (University of Saskatchewan)
16:30 (5m) Talk | RefSearch: A Search Engine for Refactoring (Artifact Evaluation Track and ROSE Festival). DOI, Pre-print, Media Attached
16:35 (5m) Talk | Can We Trust the Default Vulnerabilities Severity? (Artifact Evaluation Track and ROSE Festival). Matteo Esposito (University of Rome Tor Vergata), Sergio Moreschini (Tampere University), Valentina Lenarduzzi (University of Oulu), David Hastbacka, Davide Falessi (University of Rome Tor Vergata, Italy)
16:40 (5m) Talk | ROSE Awards (Artifact Evaluation Track and ROSE Festival)
15:30 - 17:00 | Technical Briefing on srcML & srcDiff: Infrastructure to Support Exploring, Analyzing, and Differencing of Source Code (Research Track), Session 4 Room - RGD 005. Chair(s): Michael J. Decker (Bowling Green State University), Jonathan I. Maletic (Kent State University). This technology briefing is intended for those interested in constructing custom software analysis and manipulation tools to support research. The briefing is also aimed at researchers interested in leveraging syntactic differencing in their investigations. srcML (srcML.org) is an infrastructure consisting of an XML representation for C/C++/C#/Java source code along with efficient parsing technology to convert source code to-and-from the srcML format. srcDiff (srcDiff.org) is an infrastructure supporting syntactic source-code differencing and change analysis. srcDiff leverages srcML along with an efficient differencing algorithm to produce deltas that accurately model developer edits. In this tech briefing, we give an overview of srcML and srcDiff along with a tutorial of how to use them to support research efforts. The briefing is also a forum to seek feedback and input from the community on what new enhancements and features will better support software engineering research.
Thu 5 Oct (displayed time zone: Bogota, Lima, Quito, Rio Branco)
10:30 - 12:00 | Software Testing - 1 (Research Track / Industry Track), Session 1 Room - RGD 004. Chair(s): Amjed Tahir (Massey University)
10:30 (16m) Talk | GMBFL: Optimizing Mutation-Based Fault Localization via Graph Representation (Research Track). Shumei Wu (Beijing University of Chemical Technology), Zheng Li (Beijing University of Chemical Technology), Yong Liu (Beijing University of Chemical Technology), Xiang Chen (Nantong University), Mingyu Li (Beijing University of Chemical Technology)
10:46 (16m) Talk | Characterizing the Complexity and Its Impact on Testing in ML-Enabled Systems - A Case Study on Rasa (Research Track). Junming Cao (Fudan University), Bihuan Chen (Fudan University), Longjie Hu (Fudan University), Jie Gao (Singapore University of Technology and Design), Kaifeng Huang (Fudan University), Xuezhi Song (Fudan University), Xin Peng (Fudan University)
11:02 (16m) Talk | Software Testing and Code Refactoring: A Survey with Practitioners (Industry Track). Danilo Lima, Ronnie de Souza Santos (University of Calgary), Guilherme Pires, Sildemir Silva, César França (Federal Rural University of Pernambuco (UFRPE)), Luiz Fernando Capretz (Western University)
11:18 (16m) Talk | A manual categorization of new quality issues on automatically-generated tests (Research Track). Geraldine Galindo-Gutierrez (Exact Sciences and Engineering Research Center (CICEI) - Bolivian Catholic University), Maximiliano Narea Carvajal (Pontificia Universidad Católica de Chile), Alison Fernandez Blanco (Pontificia Universidad Católica de Chile), Nicolas Anquetil (University of Lille, Lille, France), Juan Pablo Sandoval Alcocer (Pontificia Universidad Católica de Chile)
11:34 (16m) Talk | Revisiting Machine Learning based Test Case Prioritization for Continuous Integration (Research Track)
11:50 (10m) Live Q&A | 1:1 Q&A (Research Track)
10:30 - 12:00 | Software Changes (Research Track / Journal First Track / Industry Track / Tool Demo Track), Session 2 Room - RGD 04. Chair(s): Tushar Sharma (Dalhousie University), Shurui Zhou (University of Toronto)
10:30 (16m) Talk | CCBERT: Self-Supervised Code Change Representation Learning (Research Track). Xin Zhou (Singapore Management University, Singapore), Bowen Xu (North Carolina State University), DongGyun Han (Royal Holloway, University of London), Zhou Yang (Singapore Management University), Junda He (Singapore Management University), David Lo (Singapore Management University). Pre-print
10:46 (16m) Talk | Identifying Defect-Inducing Changes in Visual Code (Industry Track). Pre-print
11:02 (16m) Talk | On the Relation of Method Popularity to Breaking Changes in the Maven Ecosystem (Journal First Track). Mehdi Keshani (Delft University of Technology), Simcha Vos (Delft University of Technology), Sebastian Proksch (Delft University of Technology, Netherlands). Link to publication
11:18 (11m) Talk | Wait, wasn't that code here before? Detecting Outdated Software Documentation (Tool Demo Track). Wen Siang Tan (The University of Adelaide), Markus Wagner (Monash University, Australia), Christoph Treude (University of Melbourne)
11:29 (16m) Talk | Recommending Code Reviews Leveraging Code Changes with Structured Information Retrieval (Research Track). Ohiduzzaman Shuvo (Dalhousie University), Parvez Mahbub (Dalhousie University), Masud Rahman (Dalhousie University)
11:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
13:30 - 15:00 | Security and Program Repair (Research Track / Industry Track), Session 1 Room - RGD 004. Chair(s): Quentin Stiévenart (Université du Québec à Montréal (UQAM)), Ashkan Sami (Edinburgh Napier University)
13:30 (16m) Talk | Enhancing Code Language Models for Program Repair by Curricular Fine-tuning Framework (Research Track). Sichong Hao (Faculty of Computing, Harbin Institute of Technology), Xianjun Shi (Faculty of Computing, Harbin Institute of Technology), Hongwei Liu (Faculty of Computing, Harbin Institute of Technology), Yanjun Shu (Faculty of Computing, Harbin Institute of Technology)
13:46 (16m) Talk | ScaleFix: An Automated Repair of UI Scaling Accessibility Issues in Android Applications (Research Track). Ali S. Alotaibi (University of Southern California), Paul T. Chiou (University of Southern California), Fazle Mohammed Tawsif (University of Southern California), William G.J. Halfond (University of Southern California)
14:02 (16m) Talk | Finding an Optimal Set of Static Analyzers To Detect Software Vulnerabilities (Industry Track). Jiaqi He (University of Alberta), Revan MacQueen (University of Alberta), Natalie Bombardieri (University of Alberta), Karim Ali (University of Alberta), James Wright (University of Alberta), Cristina Cifuentes (Oracle Labs)
14:18 (16m) Talk | DockerCleaner: Automatic Repair of Security Smells in Dockerfiles (Research Track). Quang-Cuong Bui (Hamburg University of Technology), Malte Laukötter (Hamburg University of Technology), Riccardo Scandariato (Hamburg University of Technology). Pre-print
14:34 (16m) Talk | Exploring Security Commits in Python (Research Track). Shiyu Sun (George Mason University), Shu Wang (George Mason University), Xinda Wang (George Mason University), Yunlong Xing (George Mason University), Elisa Zhang (Dougherty Valley High School), Kun Sun (George Mason University). Pre-print
14:50 (10m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 17:00 | Software Faults (Industry Track / Research Track / Journal First Track), Session 1 Room - RGD 004. Chair(s): Masud Rahman (Dalhousie University), Ashkan Sami (Edinburgh Napier University)
15:30 (16m) Talk | An Empirical Study on Fault Diagnosis in Robotic Systems (Research Track). Xuezhi Song (Fudan University), Yi Li, Zhen Dong (Fudan University, China), Shuning Liu (Fudan University), Junming Cao (Fudan University), Xin Peng (Fudan University)
15:46 (16m) Talk | Predicting Defective Visual Code Changes in a Multi-Language AAA Video Game Project (Industry Track). Pre-print
16:02 (16m) Talk | An annotation-based approach for finding bugs in neural network programs (Journal First Track). Mohammad Rezaalipour (Software Institute @ USI), Carlo A. Furia (Università della Svizzera italiana (USI))
16:18 (11m) Talk | Evaluation of Cross-Lingual Bug Localization: Two Industrial Cases (Industry Track). Shinpei Hayashi (Tokyo Institute of Technology), Takashi Kobayashi (Tokyo Institute of Technology), Tadahisa Kato (Hitachi, Ltd.). DOI, Pre-print
16:29 (16m) Talk | An Empirical Study on Bugs Inside PyTorch: A Replication Study (Research Track). Sharon Chee Yin Ho (Concordia University), Vahid Majdinasab (Polytechnique Montréal), Mohayeminul Islam (University of Alberta), Diego Costa (Concordia University, Canada), Emad Shihab (Concordia University), Foutse Khomh (Polytechnique Montréal), Sarah Nadi (University of Alberta), Muhammad Raza (Queen's University)
16:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 17:00 | Program Analysis (Research Track / Journal First Track / Industry Track), Session 2 Room - RGD 04. Chair(s): Fabio Petrillo (École de technologie supérieure (ÉTS), Montréal -- Université du Québec), Mark Hills (Appalachian State University)
15:30 (16m) Talk | Slicing Shared-Memory Concurrent Programs, The Threaded System Dependence Graph Revisited (Research Track). Carlos Galindo (Universitat Politècnica de València), Marisa Llorens (Universitat Politècnica de València), Sergio Perez Rubio (Universitat Politècnica de València), Josep Silva (Universitat Politècnica de València)
15:46 (16m) Talk | An Expressive and Modular Layer Activation Mechanism for Context-Oriented Programming (Journal First Track). Paul Leger (Universidad Católica del Norte, Chile), Nicolás Cardozo (Universidad de los Andes), Hidehiko Masuhara (Tokyo Institute of Technology). Link to publication, DOI
16:02 (16m) Talk | Dynamic Slicing of WebAssembly Binaries (Research Track). Quentin Stiévenart (Université du Québec à Montréal (UQAM)), David Binkley (Loyola University Maryland), Coen De Roover (Vrije Universiteit Brussel). Pre-print
16:18 (11m) Talk | OLA: Property Directed Outer Loop Abstraction for Efficient Verification of Reactive Systems (Industry Track)
16:29 (16m) Talk | A Component-Sensitive Static Analysis Based Approach for Modeling Intents in Android Apps (Research Track). Negarsadat Abolhassani (University of Southern California), William G.J. Halfond (University of Southern California)
16:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
Fri 6 Oct (displayed time zone: Bogota, Lima, Quito, Rio Branco)
10:30 - 12:00 | Software Testing - 2 (Tool Demo Track / Industry Track / Research Track / New Ideas and Emerging Results Track), Session 1 Room - RGD 004. Chair(s): Nicolas Archila, Amjed Tahir (Massey University)
10:30 (16m) Talk | A Guided Mutation Strategy for Smart Contract Fuzzing (Research Track). Songyan Ji (Harbin Institute of Technology), Jian Dong (Harbin Institute of Technology), Jin Wu, Lishi Lu (Harbin Institute of Technology)
10:46 (11m) Talk | How Developers Implement Property-Based Tests (New Ideas and Emerging Results Track). Arthur Corgozinho (Federal University of Minas Gerais (UFMG)), Henrique Rocha (Loyola University Maryland, USA), Marco Tulio Valente (Federal University of Minas Gerais, Brazil)
10:57 (16m) Talk | Cost Reduction on Testing Evolving Cancer Registry System (Industry Track). Erblin Isaku (Simula Research Laboratory, and University of Oslo (UiO)), Hassan Sartaj (Simula Research Laboratory), Christoph Laaber (Simula Research Laboratory), Tao Yue (Beihang University), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Thomas Schwitalla (Cancer Registry of Norway), Jan F. Nygård (Cancer Registry of Norway). Pre-print
11:13 (11m) Talk | aNNoTest: An Annotation-based Test Generation Tool for Neural Network Programs (Tool Demo Track). Mohammad Rezaalipour (Software Institute @ USI), Carlo A. Furia (Università della Svizzera italiana (USI))
11:24 (16m) Talk | Specification-based Test Case Generation for C++ Engineering Software (Industry Track). Michael Moser (Software Competence Center Hagenberg GmbH), Michael Pfeiffer, Christina Piereder, Peter Hamberger, Thomas Luger, Claus Klammer
11:40 (11m) Talk | Artisan: An Action-Based Test Carving Tool for Android Apps (Tool Demo Track). Alessio Gambi (IMC University of Applied Sciences Krems), Mengzhen Li (University of Minnesota), Mattia Fazzini (University of Minnesota)
11:51 (9m) Live Q&A | 1:1 Q&A (Research Track)
Unscheduled Events
Not scheduled Talk | Artisan: An Action-Based Test Carving Tool for Android Apps (Artifact Evaluation Track and ROSE Festival). Alessio Gambi (IMC University of Applied Sciences Krems), Mengzhen Li (University of Minnesota), Mattia Fazzini (University of Minnesota)
Call for Papers
Goal and Scope
The ICSME 2023 Joint Artifact Evaluation Track and ROSE (Recognizing and Rewarding Open Science in SE) Festival is a special track that aims to promote, reward, and celebrate open science in Software Engineering research. Authors of papers accepted to any ICSME, SCAM, or VISSOFT technical track can submit their artifacts for evaluation. Papers will be given the IEEE Open Research Object or Research Object Reviewed badges if their corresponding artifacts meet certain conditions (see below).
If you already know what these badges mean, you can skip to the call for contributions. If you want to learn about the badges, keep reading!
What Artifacts are Accepted?
Artifacts of interest include (but are not limited to) the following:
- Software: implementations of systems or algorithms that are potentially useful in other studies.
- Automated experiments that replicate the study in the accepted paper.
- Data repositories: data (e.g., logging data, system traces, raw survey data) that can be used by multiple software engineering approaches.
- Frameworks: tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.
- Qualitative artifacts, such as interview scripts and survey templates.
This list is not exhaustive; if a proposed artifact is not on it, the authors are asked to email the chairs before submitting. For additional types of artifacts, please see here.
What Are the Criteria for “Open Research Object” or “Research Object Reviewed” Badges?
Open Research Object

A paper will be awarded the IEEE “Open Research Object” badge if its artifact is placed in a publicly accessible archival repository and a DOI or link to this persistent repository is provided.
Research Object Reviewed

A paper will be awarded the IEEE “Research Object Reviewed” badge if its artifact is documented, consistent, complete, and exercisable, and includes appropriate evidence of verification and validation. Moreover, the documentation and structure of the artifact should be good enough that reuse and repurposing are facilitated. The terms are defined as follows:
- Documented: At a minimum, an inventory of artifacts is included, and sufficient description is provided to enable the artifacts to be exercised.
- Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
- Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
- Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.
A paper can be given both badges if the artifact is open, exercisable, well-structured, and well-documented so as to allow reuse and repurposing. IEEE has two other categories, “Results Reproduced” and “Results Replicated”; however, these apply only if a subsequent study has been conducted by a person or team other than the authors to confirm that the main findings hold. As the artifact evaluation process is not as comprehensive as a subsequent study, we, like ICSME 2022, assign only the “Open Research Object” and “Research Object Reviewed” badges.
If you want to learn more about open science, the badging system, and the importance of creating open research objects, you can read here and here.
Call for Artifact Contributions
Authors of papers accepted to all ICSME, SCAM, and VISSOFT 2023 tracks are invited to submit artifacts that enable the reproducibility and replicability of their results to the artifact evaluation track. Depending on the assessment, we will award badges to be displayed in those papers to recognize their contributions to open science.
All awarded artifacts will be invited to present at the ROSE Festival (Recognizing and Rewarding Open Science in SE). The ROSE Festival is a special session within ICSME where researchers can receive public credit for facilitating and participating in open science.
The ICSME artifact evaluation track uses a single-anonymous review process.
Best Artifact Award
There will be a Best Artifact Award for each venue (ICSME, VISSOFT, SCAM) to recognize the effort of authors creating and sharing outstanding research artifacts. The winners of the awards will be decided during the ROSE Festival.
Submission and Review
Note that all submissions, reviewing, and notifications for this track will be via the ICSME 2023 EasyChair conference management system (“Artifact Evaluation” Track). Authors must submit the following:
- Title and authors of the accepted paper.
- A brief description of the artifact to be evaluated, given as an abstract (one paragraph).
- A one-page PDF containing: (i) a link to the artifact to be evaluated (see the steps below to prepare this link), and (ii) the requirements to run the artifact (RAM, disk, packages, specific devices, operating system, etc.).
Authors of the papers accepted to the tracks must perform the following steps to submit an artifact:
Step 1: Preparing the Artifact
There are two options depending on the nature of the artifacts: Installation Package or Simple Package. In both cases, the configuration and installation of the artifact should take less than 30 minutes. Otherwise, the artifact is unlikely to be endorsed simply because the committee will not have sufficient time to evaluate it.
- Installation Package: If the artifact consists of a tool or software system, then the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instruction, code, and data such that a person with a CS background, with a reasonable knowledge of scripting, build tools, etc., could install, build, and run the code. If the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools. Similarly, if the artifact requires specific hardware, it should be clearly documented in the requirements (see Step 3 – Documenting the Artifact). Note that we expect that the artifacts will have been vetted on a clean machine before submission.
- Simple Package: If the artifact contains only documents that can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration), the authors can just save all documents in a single package file (zip or tar.gz).
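As a sketch of the Simple Package option, the following shell commands bundle a small artifact into a single tar.gz and list its contents as a sanity check. The folder and file names here are illustrative examples, not names prescribed by the track:

```shell
# Assemble an illustrative artifact folder (names are hypothetical).
mkdir -p artifact/data
printf '# Example Artifact\n' > artifact/README.md
printf 'survey_id,answer\n1,yes\n' > artifact/data/survey.csv

# Pack everything into one archive, as required for a Simple Package.
tar -czf artifact.tar.gz artifact

# List the archive contents to double-check nothing is missing.
tar -tzf artifact.tar.gz
```

Listing the archive before submitting is a cheap way to catch a forgotten file while the 30-minute evaluation budget is still yours to spend.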
Step 2: Making the Artifact Available for Review
Authors need to make the packaged artifact (installation package or simple package) available so that the Evaluation Committee can access it. If the authors are aiming for the Open Research Object badge, the artifact needs to be (i) publicly accessible, and (ii) the link to the artifact needs to be included in the Camera Ready (CR) version. The process for awarding badges is conducted after the CR deadline.
Note that links to individual websites or links to temporary drives (e.g., Google Drive) are non-persistent, and thus artifacts placed in such locations will not be considered for the “Open Research Object” badge. Examples of persistent storage that offer DOIs are IEEE DataPort, Zenodo, figshare, and Open Science Framework. For installation packages, authors can use CodeOcean, a cloud-based computational reproducibility platform that is fully integrated with IEEE Xplore. Other suitable providers can be found here. Institutional repositories are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility.
One relatively simple way to make your packaged artifact publicly accessible:
- Create a GitHub repository.
- Register the repository at Zenodo.org. For details on that process, see the Citable Code Guidelines.
- Make a release on GitHub, at which time Zenodo will automatically archive a copy of the repository and issue a Digital Object Identifier (DOI), e.g., https://doi.org/10.5281/zenodo.4308746.
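The release step above can be sketched on the command line. The repository name, author identity, and tag are all hypothetical; what triggers Zenodo's archiving is publishing a GitHub release from such a tag once the repository is registered:

```shell
# Initialize a throwaway repository standing in for your artifact repo.
git init -q my-artifact && cd my-artifact
git config user.name  "Jane Doe"
git config user.email "jane@example.org"
git commit -q --allow-empty -m "Initial artifact commit"

# Tag a release; publishing a GitHub release from this tag is what makes
# Zenodo archive a snapshot and mint a DOI (if the repo is registered).
git tag -a v1.0.0 -m "ICSME 2023 artifact release"
git tag --list   # -> v1.0.0
```

After pushing the tag (`git push origin v1.0.0`) and creating the release in the GitHub web interface, the DOI appears on the repository's Zenodo page.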
Artifacts do not necessarily have to be publicly accessible for the review process if the goal is only the “Research Object Reviewed” badge. In this case, the authors are asked to provide a private link or a password-protected link.
Step 3: Documenting the Artifact
Authors need to provide documentation explaining how to obtain the artifact package, how to unpack it, how to get started, and how to use the artifact in sufficient detail. The documentation should cover the technicalities and uses of the artifact that are not already described in the paper. The artifact should contain the following documents (in Markdown plain-text format within the root folder):
- A README.md main file describing what the artifact does and how and where it can be obtained (with hidden links and access password if necessary). There should be a clear description, step-by-step, of how to reproduce the results presented in the paper. Reviewers should not need to figure out on their own what the input is for a specific step or what output is produced (and where). All usage instructions should be explicitly documented in the step-by-step instructions of the README.md file. Provide an explicit mapping between the results and claims reported in the paper and the steps listed in README.md for easy traceability.
- A LICENSE.md file describing the distribution rights. Note that to earn the “Open Research Object” badge, the license needs to be an OSI-compliant open-source license.
- A REQUIREMENTS.md file describing all necessary software/hardware prerequisites.
- An INSTALL.md file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation. This could be, for instance, information on what output to expect that confirms that the code is installed and working; and that the code is doing something interesting and useful. Include at the end of the INSTALL.md the configuration for which the installation was tested.
- An ADDITIONAL_INFORMATION.md file with any useful information that does not fit the required documents above.
- A copy of the accepted paper in PDF format.
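As a quick pre-submission sanity check, one might script the required layout and verify that every expected document exists. The folder name and placeholder contents are hypothetical; only the file names come from the list above:

```shell
# Create the root folder with the documents the track asks for.
mkdir -p my-artifact
for f in README.md LICENSE.md REQUIREMENTS.md INSTALL.md; do
  printf '# %s placeholder\n' "$f" > "my-artifact/$f"
done

# Verify that no required document is missing.
for f in README.md LICENSE.md REQUIREMENTS.md INSTALL.md; do
  [ -f "my-artifact/$f" ] || { echo "missing: $f"; exit 1; }
done
echo "all required documents present"
```

Running such a check on the unpacked archive, rather than on your working copy, catches documents that were accidentally left out of the package.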
Submission Link
Please use the following link: https://easychair.org/my/conference?conf=icsme2023