Registered Reports Track (ICSME 2023)
Wed 4 Oct (displayed time zone: Bogota, Lima, Quito, Rio Branco)
10:30 - 12:00 | Machine Learning Applications (Research Track / Industry Track / New Ideas and Emerging Results Track) | Session 1 Room - RGD 004 | Chair(s): Masud Rahman (Dalhousie University)
10:30 (16m) Talk | GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench (Research Track) | Ajmain Inqiad Alam, Palash Ranjan Roy, Farouq Al-omari, Chanchal K. Roy, Banani Roy, Kevin Schneider (University of Saskatchewan) | Pre-print
10:46 (16m) Talk | DeltaNN: Assessing the Impact of Computational Environment Parameters on the Performance of Image Recognition Models (Industry Track) | Nikolaos Louloudakis (University of Edinburgh), Perry Gibson (University of Glasgow), José Cano (University of Glasgow), Ajitha Rajan (University of Edinburgh)
11:02 (16m) Talk | You Augment Me: Exploring ChatGPT-based Data Augmentation for Semantic Code Search (Research Track) | Yanlin Wang (Sun Yat-sen University), Lianghong Guo (Beijing University of Posts and Telecommunications), Ensheng Shi (Xi'an Jiaotong University), Wenqing Chen (Sun Yat-sen University), Jiachi Chen (Sun Yat-sen University), Wanjun Zhong (Sun Yat-sen University), Menghan Wang (eBay Inc.), Hui Li (Xiamen University), Ziyu Lyu (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), Hongyu Zhang (Chongqing University), Zibin Zheng (Sun Yat-sen University)
11:18 (11m) Talk | Benchmarking Causal Study to Interpret Large Language Models for Source Code (New Ideas and Emerging Results Track) | Daniel Rodriguez-Cardenas, David Nader Palacio, Dipin Khati, Henry Burke, Denys Poshyvanyk (William & Mary)
11:29 (16m) Talk | Deploying Deep Reinforcement Learning Systems: A Taxonomy of Challenges (Research Track) | Ahmed Haj Yahmed, Altaf Allah Abbassi, Amin Nikanjam, Heng Li, Foutse Khomh (Polytechnique Montréal)
11:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
10:30 - 12:00 | Software Quality (Journal First Track / Tool Demo Track / New Ideas and Emerging Results Track / Research Track) | Session 2 Room - RGD 04 | Chair(s): Valentina Lenarduzzi (University of Oulu), César França (Universidade Federal Rural de Pernambuco)
10:30 (16m) Talk | Featherweight Assisted Vulnerability Discovery (Journal First Track) | David Binkley (Loyola University Maryland), Leon Moonen (Simula Research Laboratory and BI Norwegian Business School), Sibren Isaacman (Loyola University Maryland)
10:46 (11m) Talk | DebtViz: A Tool for Identifying, Measuring, Visualizing, and Monitoring Self-Admitted Technical Debt (Tool Demo Track) | Yikun Li (University of Groningen), Mohamed Soliman, Paris Avgeriou (University of Groningen), Maarten van Ittersum
10:57 (11m) Talk | Mining and Fusing Productivity Metrics with Code Quality Information at Scale (Tool Demo Track) | Harsh Mukeshkumar Shah (Dalhousie University), Qurram Zaheer Syed, Bharatwaaj Shankaranarayanan, Indranil Palit (Dalhousie University), Arshdeep Singh, Kavya Raval, Kishan Savaliya, Tushar Sharma (Dalhousie University) | Pre-print
11:08 (16m) Talk | An Investigation of Confusing Code Patterns in JavaScript (Journal First Track) | Adriano Torres (University of Brasília), Caio Oliveira (University of Brasília), Marcio Okimoto (University of Brasília), Diego Marcilio (USI Università della Svizzera italiana), Pedro Queiroga (Federal University of Pernambuco), Fernando Castor (Utrecht University & Federal University of Pernambuco), Rodrigo Bonifácio (University of Brasília), Edna Dias Canedo (University of Brasília), Márcio Ribeiro (Federal University of Alagoas), Eduardo Monteiro (University of Brasília)
11:24 (11m) Talk | StaticTracker: A Diff Tool for Static Code Warnings (Tool Demo Track)
11:35 (11m) Talk | Capturing Contextual Relationships of Buggy Classes for Detecting Quality-Related Bugs (New Ideas and Emerging Results Track)
11:46 (14m) Live Q&A | 1:1 Q&A (Research Track)
13:30 - 15:00 | Mining Software Repositories (Research Track / New Ideas and Emerging Results Track / Industry Track) | Session 1 Room - RGD 004 | Chair(s): Denys Poshyvanyk (William & Mary), Esteban Parra (Belmont University)
13:30 (16m) Talk | The Future Can't Help Fix The Past: Assessing Program Repair In The Wild (Research Track) | Vinay Kabadi (The University of Melbourne), Dezhen Kong (Zhejiang University), Siyu Xie (Zhejiang University), Lingfeng Bao, Gede Artha Azriadi Prana (Singapore Management University), Tien-Duy B. Le (Singapore Management University), Xuan-Bach D. Le (University of Melbourne), David Lo (Singapore Management University)
13:46 (16m) Talk | Process Mining from Jira Issues at a Large Company (Industry Track) | Bavo Coremans (Thermo Fisher Scientific), Arjen Klomp (Thermo Fisher Scientific), Satrio Adi Rukmono, Jacob Krüger (Eindhoven University of Technology), Dirk Fahland (Eindhoven University of Technology), Michel Chaudron (Eindhoven University of Technology)
14:02 (16m) Talk | Software Bill of Materials Adoption: A Mining Study from GitHub (Research Track) | Sabato Nocera (University of Salerno), Simone Romano (University of Salerno), Massimiliano Di Penta (University of Sannio), Rita Francese (University of Salerno), Giuseppe Scanniello (University of Salerno)
14:18 (11m) Talk | An Empirical Study on the Use of Snapshot Testing (New Ideas and Emerging Results Track) | Shun Fujita (Kyoto University), Yutaro Kashiwa (Nara Institute of Science and Technology), Bin Lin (Radboud University), Hajimu Iida (Nara Institute of Science and Technology)
14:29 (16m) Talk | A Framework for Automating the Measurement of DevOps Research and Assessment (DORA) Metrics (Research Track) | Brennan Wilkes (University of Victoria), Alessandra Maciel Paz Milani (University of Victoria), Margaret-Anne Storey (University of Victoria)
14:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 16:45 | ROSE (Artifact Evaluation Track and ROSE Festival) | Session 1 Room - RGD 004 | Chair(s): Venera Arnaoudova (Washington State University), Sonia Haiduc (Florida State University)
15:30 (5m) Talk | ROSE Festival Introduction
15:35 (5m) Talk | PyAnaDroid: A fully-customizable execution pipeline for benchmarking Android Applications
15:40 (5m) Talk | Artifact for What's in a Name? Linear Temporal Logic Literally Represents Time Lines | Runming Li, Keerthana Gurushankar, Marijn Heule (Carnegie Mellon University), Kristin Yvonne Rozier (Iowa State University)
15:45 (5m) Talk | PASD: A Performance Analysis Approach Through the Statistical Debugging of Kernel Events
15:50 (5m) Talk | Interactively exploring API changes and versioning consistency | Souhaila Serbout (Software Institute @ USI), Diana Carolina Munoz Hurtado (University of Lugano), Cesare Pautasso (Software Institute, Faculty of Informatics, USI Lugano)
15:55 (5m) Talk | Generating Understandable Unit Tests through End-to-End Test Scenario Carving
16:00 (5m) Talk | Understanding the NPM Dependencies Ecosystem of a Project Using Virtual Reality - Artifact | David Moreno-Lumbreras (Universidad Rey Juan Carlos), Jesus M. Gonzalez-Barahona (Universidad Rey Juan Carlos), Michele Lanza (Software Institute - USI, Lugano)
16:05 (5m) Talk | DGT-AR: Visualizing Code Dependencies in AR | Dussan Freire-Pozo, Kevin Cespedes-Arancibia, Leonel Merino (University of Stuttgart), Alison Fernandez Blanco (Pontificia Universidad Católica de Chile), Andres Neyem, Juan Pablo Sandoval Alcocer (Pontificia Universidad Católica de Chile)
16:10 (5m) Talk | Calibrating Deep Learning-based Code Smell Detection using Human Feedback | Himesh Nandani, Mootez Saad, Tushar Sharma (Dalhousie University)
16:15 (5m) Talk | A Component-Sensitive Static Analysis Based Approach for Modeling Intents in Android Apps | Negarsadat Abolhassani, William G.J. Halfond (University of Southern California)
16:20 (5m) Talk | Uncovering the Hidden Risks: The Importance of Predicting Bugginess in Untouched Methods | Matteo Esposito, Davide Falessi (University of Rome Tor Vergata)
16:25 (5m) Talk | GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench | Ajmain Inqiad Alam, Palash Ranjan Roy, Farouq Al-omari, Chanchal K. Roy, Banani Roy, Kevin Schneider (University of Saskatchewan)
16:30 (5m) Talk | RefSearch: A Search Engine for Refactoring | DOI | Pre-print | Media Attached
16:35 (5m) Talk | Can We Trust the Default Vulnerabilities Severity? | Matteo Esposito (University of Rome Tor Vergata), Sergio Moreschini (Tampere University), Valentina Lenarduzzi (University of Oulu), David Hastbacka, Davide Falessi (University of Rome Tor Vergata)
16:40 (5m) Talk | ROSE Awards
15:30 - 17:00 | Technical Briefing on srcML & srcDiff: Infrastructure to Support Exploring, Analyzing, and Differencing of Source Code (Research Track) | Session 4 Room - RGD 005 | Chair(s): Michael J. Decker (Bowling Green State University), Jonathan I. Maletic (Kent State University)
This technology briefing is intended for those interested in constructing custom software analysis and manipulation tools to support research, as well as researchers interested in leveraging syntactic differencing in their investigations. srcML (srcML.org) is an infrastructure consisting of an XML representation for C/C++/C#/Java source code along with efficient parsing technology to convert source code to and from the srcML format. srcDiff (srcDiff.org) is an infrastructure supporting syntactic source-code differencing and change analysis; it leverages srcML along with an efficient differencing algorithm to produce deltas that accurately model developer edits. In this briefing, we give an overview of srcML and srcDiff along with a tutorial on how to use them to support research efforts. The briefing is also a forum to seek feedback and input from the community on which new enhancements and features would better support software engineering research.
Thu 5 Oct
10:30 - 12:00 | Software Testing - 1 (Research Track / Industry Track) | Session 1 Room - RGD 004 | Chair(s): Amjed Tahir (Massey University)
10:30 (16m) Talk | GMBFL: Optimizing Mutation-Based Fault Localization via Graph Representation (Research Track) | Shumei Wu, Zheng Li, Yong Liu (Beijing University of Chemical Technology), Xiang Chen (Nantong University), Mingyu Li (Beijing University of Chemical Technology)
10:46 (16m) Talk | Characterizing the Complexity and Its Impact on Testing in ML-Enabled Systems - A Case Study on Rasa (Research Track) | Junming Cao, Bihuan Chen, Longjie Hu (Fudan University), Jie Gao (Singapore University of Technology and Design), Kaifeng Huang, Xuezhi Song, Xin Peng (Fudan University)
11:02 (16m) Talk | Software Testing and Code Refactoring: A Survey with Practitioners (Industry Track) | Danilo Lima, Ronnie de Souza Santos (University of Calgary), Guilherme Pires, Sildemir Silva, César França (Federal Rural University of Pernambuco (UFRPE)), Luiz Fernando Capretz (Western University)
11:18 (16m) Talk | A manual categorization of new quality issues on automatically-generated tests (Research Track) | Geraldine Galindo-Gutierrez (Exact Sciences and Engineering Research Center (CICEI), Bolivian Catholic University), Maximiliano Narea Carvajal (Pontificia Universidad Católica de Chile), Alison Fernandez Blanco (Pontificia Universidad Católica de Chile), Nicolas Anquetil (University of Lille), Juan Pablo Sandoval Alcocer (Pontificia Universidad Católica de Chile)
11:34 (16m) Talk | Revisiting Machine Learning based Test Case Prioritization for Continuous Integration (Research Track)
11:50 (10m) Live Q&A | 1:1 Q&A (Research Track)
10:30 - 12:00 | Software Changes (Research Track / Journal First Track / Industry Track / Tool Demo Track) | Session 2 Room - RGD 04 | Chair(s): Tushar Sharma (Dalhousie University), Shurui Zhou (University of Toronto)
10:30 (16m) Talk | CCBERT: Self-Supervised Code Change Representation Learning (Research Track) | Xin Zhou (Singapore Management University), Bowen Xu (North Carolina State University), DongGyun Han (Royal Holloway, University of London), Zhou Yang, Junda He, David Lo (Singapore Management University) | Pre-print
10:46 (16m) Talk | Identifying Defect-Inducing Changes in Visual Code (Industry Track) | Pre-print
11:02 (16m) Talk | On the Relation of Method Popularity to Breaking Changes in the Maven Ecosystem (Journal First Track) | Mehdi Keshani, Simcha Vos, Sebastian Proksch (Delft University of Technology) | Link to publication
11:18 (11m) Talk | Wait, wasn't that code here before? Detecting Outdated Software Documentation (Tool Demo Track) | Wen Siang Tan (The University of Adelaide), Markus Wagner (Monash University), Christoph Treude (University of Melbourne)
11:29 (16m) Talk | Recommending Code Reviews Leveraging Code Changes with Structured Information Retrieval (Research Track) | Ohiduzzaman Shuvo, Parvez Mahbub, Masud Rahman (Dalhousie University)
11:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
13:30 - 15:00 | Security and Program Repair (Research Track / Industry Track) | Session 1 Room - RGD 004 | Chair(s): Quentin Stiévenart (Université du Québec à Montréal (UQAM)), Ashkan Sami (Edinburgh Napier University)
13:30 (16m) Talk | Enhancing Code Language Models for Program Repair by Curricular Fine-tuning Framework (Research Track) | Sichong Hao, Xianjun Shi, Hongwei Liu, Yanjun Shu (Faculty of Computing, Harbin Institute of Technology)
13:46 (16m) Talk | ScaleFix: An Automated Repair of UI Scaling Accessibility Issues in Android Applications (Research Track) | Ali S. Alotaibi, Paul T. Chiou, Fazle Mohammed Tawsif, William G.J. Halfond (University of Southern California)
14:02 (16m) Talk | Finding an Optimal Set of Static Analyzers To Detect Software Vulnerabilities (Industry Track) | Jiaqi He, Revan MacQueen, Natalie Bombardieri, Karim Ali, James Wright (University of Alberta), Cristina Cifuentes (Oracle Labs)
14:18 (16m) Talk | DockerCleaner: Automatic Repair of Security Smells in Dockerfiles (Research Track) | Quang-Cuong Bui, Malte Laukötter, Riccardo Scandariato (Hamburg University of Technology) | Pre-print
14:34 (16m) Talk | Exploring Security Commits in Python (Research Track) | Shiyu Sun, Shu Wang, Xinda Wang, Yunlong Xing (George Mason University), Elisa Zhang (Dougherty Valley High School), Kun Sun (George Mason University) | Pre-print
14:50 (10m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 17:00 | Software Faults (Industry Track / Research Track / Journal First Track) | Session 1 Room - RGD 004 | Chair(s): Masud Rahman (Dalhousie University), Ashkan Sami (Edinburgh Napier University)
15:30 (16m) Talk | An Empirical Study on Fault Diagnosis in Robotic Systems (Research Track) | Xuezhi Song (Fudan University), Yi Li, Zhen Dong (Fudan University), Shuning Liu (Fudan University), Junming Cao (Fudan University), Xin Peng (Fudan University)
15:46 (16m) Talk | Predicting Defective Visual Code Changes in a Multi-Language AAA Video Game Project (Industry Track) | Pre-print
16:02 (16m) Talk | An annotation-based approach for finding bugs in neural network programs (Journal First Track) | Mohammad Rezaalipour (Software Institute @ USI), Carlo A. Furia (Università della Svizzera italiana (USI))
16:18 (11m) Talk | Evaluation of Cross-Lingual Bug Localization: Two Industrial Cases (Industry Track) | Shinpei Hayashi, Takashi Kobayashi (Tokyo Institute of Technology), Tadahisa Kato (Hitachi, Ltd.) | DOI | Pre-print
16:29 (16m) Talk | An Empirical Study on Bugs Inside PyTorch: A Replication Study (Research Track) | Sharon Chee Yin Ho (Concordia University), Vahid Majdinasab (Polytechnique Montréal), Mohayeminul Islam (University of Alberta), Diego Costa (Concordia University), Emad Shihab (Concordia University), Foutse Khomh (Polytechnique Montréal), Sarah Nadi (University of Alberta), Muhammad Raza (Queen's University)
16:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
15:30 - 17:00 | Program Analysis (Research Track / Journal First Track / Industry Track) | Session 2 Room - RGD 04 | Chair(s): Fabio Petrillo (École de technologie supérieure (ÉTS), Montréal, Université du Québec), Mark Hills (Appalachian State University)
15:30 (16m) Talk | Slicing Shared-Memory Concurrent Programs: The Threaded System Dependence Graph Revisited (Research Track) | Carlos Galindo, Marisa Llorens, Sergio Perez Rubio, Josep Silva (Universitat Politècnica de València)
15:46 (16m) Talk | An Expressive and Modular Layer Activation Mechanism for Context-Oriented Programming (Journal First Track) | Paul Leger (Universidad Católica del Norte), Nicolás Cardozo (Universidad de los Andes), Hidehiko Masuhara (Tokyo Institute of Technology) | Link to publication | DOI
16:02 (16m) Talk | Dynamic Slicing of WebAssembly Binaries (Research Track) | Quentin Stiévenart (Université du Québec à Montréal (UQAM)), David Binkley (Loyola University Maryland), Coen De Roover (Vrije Universiteit Brussel) | Pre-print
16:18 (11m) Talk | OLA: Property Directed Outer Loop Abstraction for Efficient Verification of Reactive Systems (Industry Track)
16:29 (16m) Talk | A Component-Sensitive Static Analysis Based Approach for Modeling Intents in Android Apps (Research Track) | Negarsadat Abolhassani, William G.J. Halfond (University of Southern California)
16:45 (15m) Live Q&A | 1:1 Q&A (Research Track)
Fri 6 Oct
10:30 - 12:00 | Software Testing - 2 (Tool Demo Track / Industry Track / Research Track / New Ideas and Emerging Results Track) | Session 1 Room - RGD 004 | Chair(s): Nicolas Archila, Amjed Tahir (Massey University)
10:30 (16m) Talk | A Guided Mutation Strategy for Smart Contract Fuzzing (Research Track) | Songyan Ji, Jian Dong (Harbin Institute of Technology), Jin Wu, Lishi Lu (Harbin Institute of Technology)
10:46 (11m) Talk | How Developers Implement Property-Based Tests (New Ideas and Emerging Results Track) | Arthur Corgozinho (Federal University of Minas Gerais (UFMG)), Henrique Rocha (Loyola University Maryland), Marco Tulio Valente (Federal University of Minas Gerais)
10:57 (16m) Talk | Cost Reduction on Testing Evolving Cancer Registry System (Industry Track) | Erblin Isaku (Simula Research Laboratory and University of Oslo (UiO)), Hassan Sartaj (Simula Research Laboratory), Christoph Laaber (Simula Research Laboratory), Tao Yue (Beihang University), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Thomas Schwitalla (Cancer Registry of Norway), Jan F. Nygård (Cancer Registry of Norway) | Pre-print
11:13 (11m) Talk | aNNoTest: An Annotation-based Test Generation Tool for Neural Network Programs (Tool Demo Track) | Mohammad Rezaalipour (Software Institute @ USI), Carlo A. Furia (Università della Svizzera italiana (USI))
11:24 (16m) Talk | Specification-based Test Case Generation for C++ Engineering Software (Industry Track) | Michael Moser (Software Competence Center Hagenberg GmbH), Michael Pfeiffer, Christina Piereder, Peter Hamberger, Thomas Luger, Claus Klammer
11:40 (11m) Talk | Artisan: An Action-Based Test Carving Tool for Android Apps (Tool Demo Track) | Alessio Gambi (IMC University of Applied Sciences Krems), Mengzhen Li (University of Minnesota), Mattia Fazzini (University of Minnesota)
11:51 (9m) Live Q&A | 1:1 Q&A (Research Track)
Accepted Papers
- Does Microservices Adoption Impact the Development Velocity? A Cohort Study. A Registered Report (Registered Reports Track)
- Investigating the Impact of Vocabulary Difficulty and Code Naturalness on Program Comprehension (Registered Reports Track)
- Test Code Refactoring Unveiled: Where and How Does It Affect Test Code Quality and Effectiveness? (Registered Reports Track)
Author’s Guide
NB: Please contact the ICSME RR track chairs with any questions, feedback, or requests for clarification. Specific analysis approaches mentioned below are intended as examples, not mandatory components.
I. Title (required)
Provide the working title of your study. It may be the same title that you submit for publication of your final manuscript, but it is not mandatory.
Example: Should your family travel with you on the enterprise? Subtitle (optional): Effect of accompanying families on the work habits of crew members
II. Authors (required)
At this stage, we believe that a single-anonymous review is most productive.
III. Structured Abstract (required)
The abstract should describe the following in 200 words or so:
- Background/Context
What is your research about? Why are you doing this research, why is it interesting?
Example: “The enterprise is the flagship of the federation, and it allows families to travel onboard. However, there are no studies that evaluate how this affects the crew members.”
- Objective/Aim
What exactly are you studying/investigating/evaluating? What are the objects of the study? We welcome both confirmatory and exploratory types of studies.
Example (Confirmatory): We evaluate whether the frequency of sick days, work effectiveness, and efficiency differ between science officers who bring their family with them and science officers serving without their family.
Example (Exploratory): We investigate the problem of frequent Holodeck use in interpersonal relationships with an ethnographic study using participant observation, in order to derive specific hypotheses about Holodeck usage.
- Method
How are you addressing your objective? What data sources are you using?
Example: We conduct an observational study and use a between-subject design. To analyze the data, we use a t-test or Wilcoxon test, depending on the underlying distribution. Our data comes from computer monitoring of Enterprise crew members.
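An analysis plan like the one in the Method example can be sketched in a few lines. The sketch below is purely illustrative: the data, the group names, and the use of a Shapiro-Wilk check to decide between the t-test and the Wilcoxon rank-sum test are assumptions for the example, not part of any registered report (a real report would pre-register the exact decision rule).

```python
# Illustrative sketch of a between-subject analysis plan.
# Data and the Shapiro-Wilk normality check are assumptions for the example.
from scipy import stats

with_family = [4, 6, 5, 7, 6, 8, 5]     # sick days, crew with family on board
without_family = [3, 2, 4, 3, 5, 2, 4]  # sick days, crew without family

# Choose the test based on whether both groups look normally distributed.
normal = all(stats.shapiro(g).pvalue > 0.05
             for g in (with_family, without_family))
if normal:
    test_name, result = "t-test", stats.ttest_ind(with_family, without_family)
else:
    test_name, result = "Wilcoxon rank-sum", stats.ranksums(with_family,
                                                            without_family)
print(f"{test_name}: p = {result.pvalue:.3f}")
```

Pre-registering this choice (including the significance threshold) is exactly what the RR format is meant to lock in before the data are collected.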
IV. Introduction
Give more details on the bigger picture of your study and how the study contributes to it. An important component of Stage 1 review is assessing the importance and relevance of the study questions, so be sure to explain this.
V. Hypotheses (required for confirmatory study) or research questions
Clearly state the research hypotheses that you want to test with your study, along with a rationale for each hypothesis.
Hypothesis example: Science officers with their family on board have more sick days than science officers without their family.
Hypothesis rationale: Since toddlers are often sick, we can expect that crew members with their family on board need to take sick days more often.
VI. Variables (required for confirmatory study)
- Independent Variable(s) and their operationalization
- Dependent Variable(s) and their operationalization (e.g., time to solve a specified task)
- Confounding Variable(s) and how their effect will be controlled (e.g., species type (Vulcan, Human, Tribble) might be a confounding factor; we control for it by separating our sample additionally into Human/Non-Human and using an ANOVA (normal distribution) or Friedman (non-normal distribution) to distill its effect).
For each variable, you should give:
- name (e.g., presence of family)
- abbreviation (if you intend to use one)
- description (e.g., whether the family of the crew member travels on board)
- scale type (e.g., nominal: either the family is present or not)
- operationalization (e.g., crew members without family on board vs. crew members with family on board)
VII. Participants/Subjects/Datasets (required)
Describe how and why you select the sample. When you conduct a meta-analysis, describe the primary studies / work on which you base your meta-analysis.
Example: We recruit crew members from the science department on a voluntary basis; they are our target population.
VIII. Execution Plan (required)
Describe the experimental setting and procedure. This includes the methods/tools that you plan to use (be specific on whether you developed it (and how) or whether it is already defined), and the concrete steps that you plan to take to support/reject the hypotheses or answer the research questions.
Example: Each crew member needs to sign the informed consent and agreement to process their data according to GDPR. Then, we conduct the interviews. Afterward, participants need to complete the simulated task …
Examples:
Confirmatory: https://osf.io/5fptj/ – Do Explicit Review Strategies Improve Code Review Performance?
Exploratory: https://osf.io/kfu9t – The Impact of Dynamics of Collaborative Software Engineering on Introverts: A
Study Protocol: https://osf.io/acnwk – Large-Scale Manual Validation of Bugfixing Changes
Submission Link
Please use the following link: https://easychair.org/my/conference?conf=icsme2023
Call for Registrations
Empirical Software Engineering Journal (EMSE), in conjunction with the International Conference on Software Maintenance and Evolution (ICSME), is continuing the Registered Reports (RR) track.
The RR track of ICSME 2023 has two goals: (1) to prevent HARKing (hypothesizing after the results are known) for empirical studies; (2) to provide early feedback to authors in their initial study design. For papers submitted to the RR track, methods and proposed analyses are reviewed prior to execution. Pre-registered studies follow a two-step process:
- Stage 1: A report is submitted that describes the planned study. The submitted report is evaluated by the reviewers of the RR track of ICSME 2023. Authors of accepted pre-registered studies will be given the opportunity to present their work at ICSME.
- Stage 2: Once a report has passed Stage 1, the study is conducted and the actual data collection and analysis take place. The results may also be negative! The full paper is submitted for review to EMSE.
Paper Types, Evaluation Criteria, and Acceptance Types
The RR track of ICSME 2023 supports two types of papers:
Confirmatory: The researcher has a fixed hypothesis (or several fixed hypotheses) and the objective of the study is to find out whether the hypothesis is supported by the facts/data. An example of a completed confirmatory study:
Inozemtseva, L., & Holmes, R. (2014, May). Coverage is not strongly correlated with test suite effectiveness. In Proceedings of the 36th international conference on software engineering (pp. 435-445).
Exploratory: The researcher does not have a hypothesis (or has one that may change during the study). Often, the objective of such a study is to understand what is observed and answer questions such as WHY, HOW, WHAT, WHO, or WHEN. We include in this category registrations for which the researcher has an initial proposed solution for an automated approach (e.g., a new deep-learning-based defect prediction approach) that serves as a starting point for his/her exploration to reach an effective solution. Examples of completed exploratory studies:
Gousios, G., Pinzger, M., & Deursen, A. V. (2014, May). An exploratory study of the pull-based software development model. In Proceedings of the 36th International Conference on Software Engineering (pp. 345-355).
Rodrigues, I. M., Aloise, D., Fernandes, E. R., & Dagenais, M. (2020, June). A Soft Alignment Model for Bug Deduplication. In Proceedings of the 17th International Conference on Mining Software Repositories (pp. 43-53).
The reviewers will evaluate RR track submissions based on the following criteria:
- The importance of the research question(s).
- The logic, rationale, and plausibility of the proposed hypotheses.
- The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis where appropriate).
- (For confirmatory study) Whether the clarity and degree of methodological detail are sufficient to exactly replicate the proposed experimental procedures and analysis pipeline.
- (For confirmatory study) Whether the authors have pre-specified sufficient outcome-neutral tests for ensuring that the results obtained can test the stated hypotheses, including positive controls and quality checks.
- (For exploratory study, if applicable) The description of the data set that is the base for exploration.
The outcome of the RR report review is one of the following:
- In-Principle Acceptance (IPA): The reviewers agree that the study is relevant, the outcome of the study (whether confirmation or rejection of the hypotheses) is of interest to the community, the protocol for data collection is sound, and the analysis methods are adequate. The authors can engage in the actual study for Stage 2. If the protocol is adhered to (or deviations are thoroughly justified), the study is published. Of course, this being a journal submission, a revision of the submitted manuscript may be necessary. Reviewers will especially evaluate how precisely the protocol of the accepted pre-registered report is followed, and whether deviations are justified.
- Continuity Acceptance (CA): The reviewers agree that the study is relevant and that the (initial) methods appear to be appropriate. However, for exploratory studies, implementation details and post-experiment analyses or discussion (e.g., why the proposed automated approach does not work) may require follow-up checks. We will try our best to retain the original reviewers. All PC members will be invited on the condition that they agree to review papers in both Stage 1 and Stage 2. Four (4) PC members will review the Stage 1 submission, and three (3) will review the Stage 2 submission.
- Rejection: The reviewers do not agree on the relevance of the study or are not convinced that the study design is sufficiently mature. Comments are provided to the authors to improve the study design before starting it.
Note: For ICSME 2023, only confirmatory studies are granted an IPA. Exploratory studies in software engineering often cannot be adequately assessed until after the study has been completed and the findings are elaborated and discussed in a full paper. For example, consider a study in an RR proposing defect prediction using a new deep learning architecture. This work falls under the exploratory category. It is difficult to offer IPA, as we do not know whether the approach is any better than a traditional one based on, e.g., decision trees. Negative results are welcome; however, it is important that a negative-results paper goes beyond presenting “we tried and failed” and provides interesting insights to readers, e.g., why the results are negative or what they mean for further studies on this topic (following the criteria of REplication and Negative Results (RENE) tracks, e.g., https://saner2019.github.io/cfp/RENETrack.html). Furthermore, it is important to note that authors are required to document all deviations (if any) in a section of the paper.
Submission Process and Instructions
The timeline for ICSME 2023 RR track will be as follows:
June 2: Authors submit their initial report.
- Submissions must not exceed 6 pages (plus 1 additional page of references). The page limit is strict.
- Submissions must conform to the two-column IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options).
July 7: Authors receive PC members’ reviews.
July 21: Authors submit a response letter + revised report in a single PDF.
- The response letter should address reviewer comments and questions.
- The response letter + revised report must not exceed 12 pages (plus 1 additional page of references).
- The response letter does not need to follow IEEE formatting instructions.
August 11: Notification of Stage 1
- (Outcome: acceptance (CA/IPA) or rejection).
August 18: Authors submit their accepted RR report to arXiv.
- The arXiv report will be checked by PC members in Stage 2.
Note: Due to the timeline, RR reports will not be published in the ICSME 2023 proceedings. However, the authors will present their RR at the conference.
Before May 31, 2024: Authors submit a full paper to EMSE. Instructions will be provided later. However, the following constraints will be enforced:
- Justifications need to be given for any change of authors. If authors are added or removed, or the author order is changed, between the original Stage 1 submission and the EMSE submission, all authors will need to complete and sign a “Change of authorship request form”. The Editors in Chief of EMSE and the chairs of the RR track reserve the right to deny author changes. If you anticipate any authorship changes, please reach out to the chairs of the RR track as early as possible.
- PC members who reviewed an RR report in Stage 1 and their directly supervised students cannot be added as authors of the corresponding submission in Stage 2.
Submissions can be made via the submission site (https://easychair.org/my/conference?conf=icsme2023) by the submission deadline. Any submission that does not comply with the instructions above and the mandatory information specified in the Author Guide will likely be desk-rejected. In addition, by submitting, the authors acknowledge that they are aware of and agree to be bound by the following policies:
- The IEEE Plagiarism policy. In particular, papers submitted to ICSME 2023 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for ICSME 2023. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases (including immediate rejection and reporting of the incident to IEEE). To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the IEEE, to detect violations of these policies.
- The authorship policy of the IEEE.