ESEM 2021
Mon 11 - Fri 15 October 2021

Call for Registered Reports

Following the successful experiences of 2020, the Empirical Software Engineering journal (EMSE) is introducing a Registered Reports (RR) track within the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM). The RR track of ESEM 2021 has two goals: (1) to prevent HARKing (hypothesizing after the results are known) in empirical studies; and (2) to provide early feedback to authors on their initial study design. For papers submitted to the RR track, methods and proposed analyses are reviewed before the study is executed. Pre-registered studies follow a two-stage process:

  • Stage 1: A report is submitted that describes the planned study. The submitted report is evaluated by the reviewers of the RR track of ESEM 2021. Authors of accepted pre-registered studies will be given the opportunity to present their work at ESEM.
  • Stage 2: Once a report has passed Stage 1, the study is conducted, and the actual data collection and analysis take place. The results may also be negative! The full paper is then submitted for review to the EMSE journal.

See the associated Author’s Guide. Please contact the ESEM Registered Reports track chairs (Maria Teresa Baldassarre, Neil Ernst, or Jeff Carver) for any questions, clarifications, or comments.

Paper Types, Evaluation Criteria, and Acceptance Types

The RR track of ESEM 2021 supports two types of papers:

Confirmatory: The researcher has a fixed hypothesis (or several fixed hypotheses) and the objective of the study is to find out whether the hypothesis is supported by the facts/data.

An example of a completed confirmatory study:

  • Inozemtseva, L., & Holmes, R. (2014, May). Coverage is not strongly correlated with test suite effectiveness. In Proceedings of the 36th international conference on software engineering (pp. 435-445).

Exploratory: The researcher does not have a hypothesis (or has one that may change during the study). Often, the objective of such a study is to understand what is observed and answer questions such as WHY, HOW, WHAT, WHO, or WHEN. We include in this category registrations for which the researcher has an initial proposed solution for an automated approach (e.g., a new deep-learning-based defect prediction approach) that serves as a starting point for their exploration toward an effective solution.

Examples of completed exploratory studies:

  • Gousios, G., Pinzger, M., & Deursen, A. V. (2014, May). An exploratory study of the pull-based software development model. In Proceedings of the 36th International Conference on Software Engineering (pp. 345-355).
  • Rodrigues, I. M., Aloise, D., Fernandes, E. R., & Dagenais, M. (2020, June). A Soft Alignment Model for Bug Deduplication. In Proceedings of the 17th International Conference on Mining Software Repositories (pp. 43-53).

The reviewers will evaluate RR track submissions based on the following criteria:

  • The importance of the research question(s).
  • The logic, rationale, and plausibility of the proposed hypotheses.
  • The soundness and feasibility of the methodology and analysis pipeline (including statistical power analysis where appropriate).
  • (For confirmatory studies) Whether the clarity and degree of methodological detail are sufficient to exactly replicate the proposed experimental procedures and analysis pipeline.
  • (For confirmatory studies) Whether the authors have pre-specified sufficient outcome-neutral tests for ensuring that the obtained results can test the stated hypotheses, including positive controls and quality checks.
  • (For exploratory studies, if applicable) The description of the data set that forms the basis for exploration.

The outcome of the RR report review is one of the following:

  • In-Principle Acceptance (IPA): The reviewers agree that the study is relevant, that its outcome (whether it confirms or rejects the hypothesis) is of interest to the community, that the protocol for data collection is sound, and that the analysis methods are adequate. The authors can proceed to the actual study for Stage 2. If the protocol is adhered to (or deviations are thoroughly justified), the study is published. Of course, since this is a journal submission, a revision of the submitted manuscript may be necessary. Reviewers will especially evaluate how precisely the protocol of the accepted pre-registered report is followed, and whether deviations are justified.
  • Continuity Acceptance (CA): The reviewers agree that the study is relevant and that the (initial) methods appear appropriate. However, for exploratory studies, implementation details and post-experiment analyses or discussion (e.g., why the proposed automated approach does not work) may require follow-up checks. We will try our best to retain the original reviewers. All PC members will be invited on the condition that they agree to review papers in both Stage 1 and Stage 2. Four (4) PC members will review the Stage 1 submission, and three (3) will review the Stage 2 submission.
  • Rejection: The reviewers do not agree on the relevance of the study or are not convinced that the study design is sufficiently mature. Comments are provided to the authors to improve the study design before starting it.

Note: For ESEM 2021, we will only offer IPA to confirmatory studies. Exploratory studies in software engineering often cannot be adequately assessed until after the study has been completed and the findings are elaborated and discussed in a full paper. For example, consider an RR proposing defect prediction using a new deep learning architecture. This work falls under the exploratory category; it is difficult to offer IPA because we do not yet know whether the approach is any better than a traditional one based on, e.g., decision trees. Negative results are welcome; however, a negative-results paper must go beyond presenting “we tried and failed” and provide interesting insights to readers, e.g., why the results are negative or what they mean for further studies on this topic (following, for example, the criteria of REplication and Negative Results (RENE) tracks, e.g., https://saner2019.github.io/cfp/RENETrack.html).

Submission Process and Instructions

The timeline for ESEM 2021 RR track will be as follows:

June 30: Authors submit their initial report.

  • Submissions must not exceed 6 pages (plus 1 additional page of references). The page limit is strict.
  • Submissions must be formatted according to the ACM proceedings template, which can be found at https://www.acm.org/publications/proceedings-template. Use the sigconf template.

August 17: Authors receive reviews.

August 30: Authors submit a response letter + revised report in a single PDF.

  • The response letter should address reviewer comments and questions.
  • The response letter + revised report must not exceed 12 pages (plus 1 additional page of references).
  • The response letter does not need to follow ACM formatting instructions.

September 20: Notification of Stage 1

  • (Outcome: in-principle acceptance, continuity acceptance, or rejection).

September 30: Authors submit their accepted RR report to arXiv

  • To be checked by PC members for Stage 2
  • Note: Due to the timeline, RR reports will not be published in the ESEM 2021 proceedings. Authors will present their RR during the conference, either live or pre-recorded.

Before June 30, 2022: Authors submit a full paper to EMSE. Instructions will be provided later. However, the following constraints will be enforced:

  • Justifications need to be given for any change of authors. If authors are added or removed, or the author order changes between the original Stage 1 submission and the EMSE submission, all authors will need to complete and sign a “Change of authorship request form”. The Editors-in-Chief of EMSE and the chairs of the RR track reserve the right to deny author changes. If you anticipate any authorship changes, please reach out to the chairs of the RR track as early as possible.
  • PC members who reviewed an RR report in Stage 1 and their directly supervised students cannot be added as authors of the corresponding submission in Stage 2.

Submissions can be made via the submission site (tbd) by the submission deadline. Any submission that does not comply with the aforementioned instructions and the mandatory information specified in the Author’s Guide is likely to be desk rejected. In addition, by submitting, the authors acknowledge that they are aware of and agree to be bound by the following policies:

  • The ACM plagiarism policy and procedures (http://www.acm.org/publications/policies/plagiarism_policy). In particular, papers submitted to ESEM 2021 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for ESEM 2021. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases (including immediate rejection and reporting of the incident to ACM). To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM, to detect violations of these policies.
  • The IEEE Policy on Authorship (http://ieeeauthorcenter.ieee.org/publish-with-ieee/publishing-ethics/).

Tue 12 Oct

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

14:20 - 15:15
Testing & Security 1 (Technical Papers / Emerging Results and Vision papers) at ESEM ROOM
Chair(s): Davide Fucci Blekinge Institute of Technology
14:20
15m
Talk
A comparative study of vulnerability reporting by software composition analysis tools
Technical Papers
Nasif Imtiaz North Carolina State University, Seaver Thorn North Carolina State University, Laurie Williams North Carolina State University
Pre-print Media Attached
14:35
15m
Talk
An Empirical Study of Rule-Based and Learning-Based Approaches for Static Application Security Testing
Technical Papers
Roland Croft, Dominic Newlands University of Adelaide, Ziyu Chen Monash University, Muhammad Ali Babar University of Adelaide
Pre-print Media Attached
14:50
15m
Talk
An Empirical Analysis of Practitioners' Perspectives on Security Tool Integration into DevOps
Technical Papers
Roshan Namal Rajapakse The University of Adelaide, Mansooreh Zahedi University of Adelaide, Muhammad Ali Babar University of Adelaide
Pre-print
15:05
10m
Talk
Why Some Bug-bounty Vulnerability Reports are Invalid?
Emerging Results and Vision papers
Saman Shafigh University of New South Wales, Boualem Benatallah University of New South Wales, Carlos Rodriguez University of New South Wales, Mortada Al-Banna University of New South Wales
15:30 - 16:35
Testing & Security 2 (Technical Papers / Emerging Results and Vision papers) at ESEM ROOM
Chair(s): Davide Fucci Blekinge Institute of Technology
15:30
15m
Talk
Barriers to Shift-Left Security: The Unique Pain Points of Writing Automated Tests Involving Security Controls
Technical Papers
Danielle Gonzalez Rochester Institute of Technology and Microsoft, Paola Peralta Perez Rochester Institute of Technology, Mehdi Mirakhorli Rochester Institute of Technology
DOI
15:45
15m
Talk
Security Smells Pervade Mobile App Servers
Technical Papers
Pascal Gadient University of Bern, Marc-Andrea Tarnutzer University of Bern, Oscar Nierstrasz University of Bern, Switzerland, Mohammad Ghafari University of Auckland
Pre-print
16:00
15m
Talk
Who are Vulnerability Reporters? A Large-scale Empirical Study on FLOSS
Technical Papers
Nikolaos Alexopoulos Technical University of Darmstadt, Andy Meneely Rochester Institute of Technology, Dorian Arnouts Technical University of Darmstadt, Max Mühlhäuser Technical University of Darmstadt
Pre-print
16:15
10m
Talk
Python Crypto Misuses in the Wild
Emerging Results and Vision papers
Anna-Katharina Wickert TU Darmstadt, Germany, Lars Baumgärtner TU Darmstadt, Florian Breitfelder TU Darmstadt, Mira Mezini TU Darmstadt, Germany
Pre-print Media Attached
16:25
10m
Talk
Web Application Testing: Using Tree Kernels to Detect Near-duplicate States in Automated Model Inference
Emerging Results and Vision papers
Anna Corazza Università degli Studi di Napoli Federico II, Sergio Di Martino Università degli Studi di Napoli Federico II, Adriano Peron Università degli Studi di Napoli Federico II, Luigi Libero Lucio Starace Università degli Studi di Napoli Federico II
Pre-print Media Attached

Wed 13 Oct


13:00 - 14:10
Research Methods (Emerging Results and Vision papers / Technical Papers / Journal-first Papers) at ESEM ROOM
Chair(s): Tayana Conte Universidade Federal do Amazonas
13:00
15m
Talk
The who, what, how of software engineering research: a socio-technical framework
Journal-first Papers
Margaret-Anne Storey University of Victoria, Neil Ernst University of Victoria, Courtney Williams, Eirini Kalliamvakou University of Victoria
13:15
15m
Talk
What Evidence We would Miss If We Do not Use Grey Literature?
Technical Papers
Fernando Kamei Federal Institute of Alagoas (IFAL), Gustavo Pinto Federal University of Pará (UFPA) and Zup Innovation, Igor Scaliante Wiese Federal University of Technology – Paraná - UTFPR, Márcio Ribeiro Federal University of Alagoas, Brazil, Sergio Soares Informatics Center - CIn/UFPE
Pre-print Media Attached
13:30
10m
Talk
Towards a Methodology for Participant Selection in Software Engineering Experiments. A Vision of the Future
Emerging Results and Vision papers
Valentina Lenarduzzi LUT University, Oscar Dieste Universidad Politécnica de Madrid, Davide Fucci Blekinge Institute of Technology, Sira Vegas Universidad Politecnica de Madrid
Pre-print Media Attached
13:40
10m
Talk
Important Experimentation Characteristics: An Expert Survey
Emerging Results and Vision papers
Florian Auer University of Innsbruck, Michael Felderer University of Innsbruck
13:50
10m
Talk
Inclusion and Exclusion Criteria in Software Engineering Tertiary Studies: A Systematic Mapping and Emerging Framework
Emerging Results and Vision papers
Dolors Costal Universitat Politècnica de Catalunya, Carles Farré Universitat Politècnica de Catalunya, Xavier Franch Universitat Politècnica de Catalunya, Carme Quer Universitat Politècnica de Catalunya
14:00
10m
Talk
Towards Sustainability of Systematic Literature Reviews
Emerging Results and Vision papers
Vinicius Santos University of São Paulo (ICMC/USP), São Carlos - SP, Anderson Y. Iwazaki University of São Paulo (ICMC/USP), São Carlos - SP, Katia Felizardo Federal Technological University of Paraná, Érica F. Souza Federal Technological University of Paraná, Cornélio Procópio - PR, Elisa Yumi Nakagawa University of São Paulo
14:20 - 15:15
Testing & Security 3 (Emerging Results and Vision papers / Journal-first Papers / Technical Papers) at ESEM ROOM
Chair(s): Robert Feldt Chalmers University of Technology, Sweden
14:20
15m
Talk
On (Mis)Perceptions of Testing Effectiveness: An Empirical Study
Journal-first Papers
Sira Vegas Universidad Politecnica de Madrid, Patricia Riofrio, Esperanza Marcos Universidad Rey Juan Carlos, Natalia Juristo Universidad Politecnica de Madrid
14:35
15m
Talk
Testing Smart Contracts: Which Technique Performs Best?
Technical Papers
Sefa Akca University of Edinburgh, Chao Peng University of Edinburgh, UK, Ajitha Rajan University of Edinburgh
14:50
15m
Talk
Automated isolation for white-box test generation
Journal-first Papers
Dávid Honfi, Zoltán Micskei Budapest University of Technology and Economics
Link to publication DOI
15:05
10m
Talk
Contextual Understanding and Improvement of Metamorphic Testing in Scientific Software Development
Emerging Results and Vision papers
Zedong Peng University of Cincinnati, Upulee Kanewala University of North Florida, Nan Niu University of Cincinnati
15:30 - 16:25
Development Approaches and Requirements (Technical Papers / Emerging Results and Vision papers) at ESEM ROOM
Chair(s): Robert Feldt Chalmers University of Technology, Sweden
15:30
15m
Talk
Why Do Organizations Adopt Agile Scaling Frameworks? A Survey of Practitioners
Technical Papers
Putta Abheeshta Aalto University, Ömer Uludag Technical University of Munich, Shun Long Hong Technical University of Munich, Maria Paasivaara LUT University, Finland & IT University of Copenhagen, Denmark & Aalto University, Finland, Casper Lassenius Aalto University, Finland and Simula Metropolitan Center for Digital Engineering, Norway
15:45
15m
Talk
A Model of Software Prototyping based on a Systematic Map
Technical Papers
Elizabeth Bjarnason Lund University, Sweden, Franz Lang Department of Computer Science, Lund University, Alexander Mjöberg Department of Computer Science, Lund University
Media Attached
16:00
15m
Talk
A Survey-Based Qualitative Study to Characterize Expectations of Software Developers from Five Stakeholders
Technical Papers
Khalid Hasan Bangladesh University of Engineering and Technology, Partho Chakraborty Bangladesh University of Engineering and Technology Dhaka, Bangladesh, Rifat Shahriyar Bangladesh University of Engineering and Technology Dhaka, Bangladesh, Anindya Iqbal Bangladesh University of Engineering and Technology Dhaka, Bangladesh, Gias Uddin University of Calgary, Canada
16:15
10m
Talk
Vision for an Artefact-based Approach to Regulatory Requirements Engineering
Emerging Results and Vision papers
Oleksandr Kosenkov fortiss GmbH, Michael Unterkalmsteiner Blekinge Institute of Technology, Daniel Mendez Blekinge Institute of Technology, Davide Fucci Blekinge Institute of Technology

Thu 14 Oct


13:00 - 14:05
Software Architecture and Design (Technical Papers / Emerging Results and Vision papers) at ESEM ROOM
Chair(s): Davide Taibi Tampere University
13:00
15m
Talk
Tackling Consistency-Related Design Challenges of Distributed Data-Intensive Systems – An Action Research Study
Technical Papers
Susanne Braun Fraunhofer IESE, Stefan Deßloch TU Kaiserslautern, Eberhard Wolff INNOQ, Frank Elberzhager Fraunhofer IESE, Andreas Jedlitschka Fraunhofer
Pre-print Media Attached
13:15
15m
Talk
Facing the Giant: a Grounded Theory Study of Decision-Making in Microservices Migrations
Technical Papers
Hamdy Michael Ayas Chalmers University of Technology | University of Gothenburg, Philipp Leitner Chalmers University of Technology, Sweden / University of Gothenburg, Sweden, Regina Hebig
Pre-print Media Attached
13:30
15m
Talk
The Existence and Co-Modifications of Code Clones within or across Microservices
Technical Papers
Ran Mo Central China Normal University, Yang Zhao Central China Normal University, Qiong Feng Nanjing University of Science and Technology, Zengyang Li Central China Normal University
DOI
13:45
10m
Talk
Study of the Utility Of Text Classification Based Software Architecture Recovery Method RELAX for Maintenance
Emerging Results and Vision papers
Daniel Link University of Southern California, Kamonphop Srisopha University of Southern California, USA, Barry Boehm University of Southern California
Media Attached
13:55
10m
Talk
Semantic Slicing of Architectural Change Commits: Towards Semantic Design Review
Emerging Results and Vision papers
Amit Kumar Mondal University of Saskatchewan, Chanchal K. Roy University of Saskatchewan, Kevin Schneider University of Saskatchewan, Banani Roy University of Saskatchewan, Sristy Sumana Nath University of Saskatchewan
14:20 - 15:15
Development Approaches, Requirements & Behavioral Software Engineering (Technical Papers / Journal-first Papers / Emerging Results and Vision papers) at ESEM ROOM
Chair(s): Valentina Lenarduzzi LUT University
14:20
15m
Talk
Views on Quality Requirements in Academia and Practice: Commonalities, Differences, and Context-Dependent Grey Areas
Journal-first Papers
Andreas Vogelsang University of Cologne, Jonas Eckhardt Technische Universität München, Daniel Mendez Blekinge Institute of Technology, Moritz Berger University of Bonn
14:35
15m
Research paper
Characteristics and Challenges of Low-Code Development: The Practitioners’ Perspective
Technical Papers
Yajing Luo Wuhan University, Peng Liang Wuhan University, Chong Wang Wuhan University, Mojtaba Shahin Monash University, Jing Zhan University of Illinois at Urbana-Champaign
Link to publication DOI Pre-print Media Attached
14:50
15m
Talk
Towards a Human Values Dashboard for Software Development: An Exploratory Study
Technical Papers
Arif Nurwidyantoro Faculty of Information Technology, Monash University, Mojtaba Shahin Monash University, Michel Chaudron Eindhoven University of Technology, The Netherlands, Waqar Hussain Monash University, Harsha Perera Monash University, Rifat Ara Shams Monash University, Jon Whittle CSIRO's Data61 and Monash University
Pre-print Media Attached
15:05
10m
Talk
A Rubric to Identify Misogynistic and Sexist Texts from Software Developer Communications
Emerging Results and Vision papers
Sayma Sultana Wayne State University, Jaydeb Sarker Department of Computer Science, Wayne State University, Amiangshu Bosu Wayne State University
15:30 - 16:00
Defect Prediction (Technical Papers) at ESEM ROOM
Chair(s): Valentina Lenarduzzi LUT University
15:30
15m
Talk
Continuous Software Bug Prediction
Technical Papers
Song Wang York University, Junjie Wang Institute of Software at Chinese Academy of Sciences, Jaechang Nam Handong Global University, Nachiappan Nagappan Facebook
Pre-print
15:45
15m
Talk
An Empirical Examination of the Impact of Bias on Just-in-time Defect Prediction
Technical Papers
Jiri Gesi University of California, Irvine, Jiawei Li University of california, Irvine, Iftekhar Ahmed University of California, Irvine
16:00 - 16:30
Registered Reports at ESEM ROOM
Chair(s): Jeff Carver University of Alabama
16:00
7m
Talk
To VR or not to VR: Is virtual reality suitable to understand software development metrics?
Registered Reports
David Moreno-Lumbreras Universidad Rey Juan Carlos, Gregorio Robles Universidad Rey Juan Carlos, Daniel Izquierdo Cortazar Bitergia, Jesus M. Gonzalez-Barahona Universidad Rey Juan Carlos
Pre-print
16:07
7m
Talk
Gender Bias in Remote Pair Programming: The twincode exploratory study
Registered Reports
Amador Durán, Pablo Fernandez Universidad de Sevilla, Beatriz Bernárdez Universidad de Sevilla, Nathaniel Weinman UC Berkeley, Aslihan Akalin UC Berkeley, Armando Fox UC Berkeley
Pre-print
16:14
7m
Talk
Adopting Automated Bug Assignment in Practice - A Registered Report of an Industrial Case Study
Registered Reports
Markus Borg RISE Research Institutes of Sweden, Leif Jonsson Ericsson AB, Emelie Engstrom Lund University, Bela Bartalos, Attila Szabo
Pre-print
16:21
7m
Talk
Which Design Decisions in AI-enabled Mobile Applications Contribute to Greener AI?
Registered Reports
Roger Creus Universitat Politècnica de Catalunya, Silverio Martínez-Fernández UPC-BarcelonaTech, Xavier Franch Universitat Politècnica de Catalunya
Pre-print

Fri 15 Oct


14:20 - 15:20
Mining Software Repositories (Technical Papers) at ESEM ROOM
Chair(s): Fabio Calefato University of Bari
14:20
15m
Talk
Characterizing and Predicting Good First Issues
Technical Papers
Yuekai Huang Institute of Software, Chinese Academy of Sciences, Junjie Wang Institute of Software at Chinese Academy of Sciences, Song Wang York University, Zhe Liu Institute of Software at Chinese Academy of Sciences, Dandan Wang Institute of Software, Chinese Academy of Sciences, Qing Wang Institute of Software at Chinese Academy of Sciences
Pre-print
14:35
15m
Talk
An Empirical Study on Refactoring-Inducing Pull Requests
Technical Papers
Flavia Coelho Federal University of Campina Grande, Nikolaos Tsantalis Concordia University, Tiago Massoni Federal University of Campina Grande, Everton L. G. Alves Federal University of Campina Grande
Pre-print Media Attached
14:50
15m
Talk
Promises and Perils of Inferring Personality on GitHub
Technical Papers
Frenk van Mil Delft University of Technology, Ayushi Rastogi University of Groningen, The Netherlands, Andy Zaidman Delft University of Technology
Pre-print Media Attached
15:05
15m
Talk
An Exploratory Study on Dead Methods in Open-source Java Desktop Applications
Technical Papers
Danilo Caivano University of Bari, Pietro Cassieri University of Basilicata, Simone Romano University of Bari, Giuseppe Scanniello University of Basilicata
15:30 - 16:00
Mining Software Repositories & Energy Consumption (Technical Papers) at ESEM ROOM
Chair(s): Fabio Calefato University of Bari
15:30
15m
Talk
Public Software Development Activity During the Pandemic
Technical Papers
Vanessa Klotzman University of California, Irvine, Farima Farmahinifarahani University of California at Irvine, Crista Lopes University of California, Irvine
15:45
15m
Talk
Evaluating the Impact of Java Virtual Machines on Energy Consumption
Technical Papers
Zakaria Ournani Orange LABS / INRIA / Univ.Lille, Mohammed Chakib Belgaid INRIA, Romain Rouvoy Univ. Lille / Inria / IUF, Pierre Rust Orange labs, Joel Penhoat Orange Labs

Author’s Guide

NB: Please contact the ESEM RR track chairs with any questions, feedback, or requests for clarification. The specific analysis approaches mentioned below are intended as examples, not mandatory components.

I. Title (required)

Provide the working title of your study. It may be the same title that you submit for publication of your final manuscript, but it is not mandatory.

Example: Should your family travel with you on the enterprise? Subtitle (optional): Effect of accompanying families on the work habits of crew members.

II. Authors (required)

At this stage, we believe that a single-blind review is most productive.

III. Structured Abstract (required)

The abstract should describe the following in 200 words or so:

  • Background/Context
    What is your research about? Why are you doing this research, why is it interesting?
    Example: “The enterprise is the flagship of the federation, and it allows families to travel onboard. However, there are no studies that evaluate how this affects the crew members.”
  • Objective/Aim
    What exactly are you studying/investigating/evaluating? What are the objects of the study? We welcome both confirmatory and exploratory types of studies.
    Example (Confirmatory): We evaluate whether the frequency of sick days, work effectiveness, and work efficiency differ between science officers who bring their families with them and science officers who serve without their families.
    Example (Exploratory): We investigate the problem of frequent Holodeck use on interpersonal relationships with an ethnographic study using participant observation, in order to derive specific hypotheses about Holodeck usage.
  • Method
    How are you addressing your objective? What data sources are you using?
    Example: We conduct an observational study and use a between subject design. To analyze the data, we use a t-test or Wilcoxon test, depending on the underlying distribution. Our data comes from computer monitoring of Enterprise crew members.
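
The Method example above pre-specifies a contingent analysis: a t-test when the data look normally distributed, a Wilcoxon test otherwise. A distribution-free alternative that a Stage 1 report could pre-register instead is a permutation test on the difference in group means. The sketch below is purely illustrative and not part of the official guide; the function name and the sick-day counts are made up to match the running Enterprise example.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_resamples=2000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Shuffle the group labels many times and count how often the shuffled
    mean difference is at least as extreme as the observed one. Makes no
    normality assumption, so no t-test/Wilcoxon branching is needed.
    """
    rng = random.Random(seed)  # fixed seed so the pre-registered analysis is reproducible
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # random relabeling of subjects into two groups
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    # Add-one smoothing keeps the estimate away from an impossible p = 0.
    return (extreme + 1) / (n_resamples + 1)

# Hypothetical sick-day counts for officers with / without family on board.
with_family = [4, 6, 5, 7, 6, 5]
without_family = [2, 3, 2, 4, 3, 2]
p = permutation_test(with_family, without_family)  # small p in this toy data
```

A report pre-registering such a test should also state the number of resamples and the significance threshold in advance, so that the Stage 2 analysis leaves no analyst degrees of freedom.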

IV. Introduction

Give more details on the bigger picture of your study and how it contributes to this bigger picture. An important component of Stage 1 review is assessing the importance and relevance of the study questions, so be sure to explain this.

V. Hypotheses (required for confirmatory study) or research questions

Clearly state the research hypotheses that you want to test with your study, together with a rationale for each hypothesis.

Hypothesis: Science officers with their family on board have more sick days than science officers without their family.

Rationale: Since toddlers are often sick, we can expect that crew members with their family onboard need to take sick days more often.

VI. Variables (required for confirmatory study)

  • Independent Variable(s) and their operationalization.
  • Dependent Variable(s) and their operationalization (e.g., time to solve a specified task).
  • Confounding Variable(s) and how their effect will be controlled (e.g., species type (Vulcan, Human, Tribble) might be a confounding factor; we control for it by separating our sample additionally into Human/Non-Human and using an ANOVA (normal distribution) or Friedman (non-normal distribution) to distill its effect).

For each variable, you should give:

  • name (e.g., presence of family)
  • abbreviation (if you intend to use one)
  • description (e.g., whether the family of the crew member travels on board)
  • scale type (e.g., nominal: either the family is present or not)
  • operationalization (crew members without family on board vs. crew members with family on board)
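
The per-variable checklist above can also be kept as a small structured record alongside the study's analysis scripts, so that the pre-registered operationalization is machine-checkable. The sketch below is purely illustrative; the record format, field names, and the "FAM" abbreviation are our own, not mandated by this guide.

```python
# Illustrative record for one variable from the running example.
# Field names mirror the checklist; this format is not mandated.
family_presence = {
    "name": "presence of family",
    "abbreviation": "FAM",  # hypothetical abbreviation
    "description": "whether the family of the crew member travels on board",
    "scale_type": "nominal",  # family present vs. not present
    "operationalization": ["family on board", "no family on board"],
}
```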

VII. Participants/Subjects/Datasets (required)

Describe how and why you select the sample. If you conduct a meta-analysis, describe the primary studies/work on which you base it.

Example: We recruit crew members from the science department on a voluntary basis. They are our target population.

VIII. Execution Plan (required)

Describe the experimental setting and procedure. This includes the methods/tools that you plan to use (be specific on whether you developed them, and how, or whether they are already defined), and the concrete steps you plan to take to support/reject the hypotheses or answer the research questions.

Example: Each crew member needs to sign the informed consent and agreement to process their data according to GDPR. Then, we conduct the interviews. Afterwards, participants need to complete the simulated task …

Examples

Confirmatory:

https://osf.io/5fptj/ - Do Explicit Review Strategies Improve Code Review Performance?

Exploratory:

https://osf.io/kfu9t - The Impact of Dynamics of Collaborative Software Engineering on Introverts: A Study Protocol

https://osf.io/acnwk - Large-Scale Manual Validation of Bugfixing Changes