CHASE 2023
Sun 14 - Mon 15 May 2023 Melbourne, Australia
co-located with ICSE 2023

This year, CHASE is starting the Registered Reports (RR) track in conjunction with the Empirical Software Engineering journal (EMSE). The RR track is directed specifically at our CHASE community, with cooperative and human aspects of software engineering in mind. In addition to the typical SE topics, we welcome surveys, human studies, and social data analysis.

The RR track of CHASE 2023 has two goals:

  1. to prevent HARKing (hypothesizing after the results are known) for empirical studies;
  2. to provide early feedback to authors on their initial study design.

For papers submitted to the RR track, methods and proposed analyses are reviewed prior to execution. Pre-registered studies follow a two-step process:

  • Stage 1: A report is submitted that describes the planned study. The submitted report is evaluated by the reviewers of the RR track of CHASE 2023. Authors of accepted pre-registered studies will be given the opportunity to present their work at CHASE.
  • Stage 2: Once a report has passed Stage 1, the study will be conducted, and actual data collection and analysis will take place. The results may also be negative! The full paper is submitted for review to EMSE as part of the CHASE’23 Special Issue.

See the associated Author’s Guide. Please contact the CHASE 2023 Program Co-chairs for any questions, clarifications, or comments.

Paper Types, Evaluation Criteria, and Acceptance Types

The RR track of CHASE 2023 supports two types of papers:

  • Confirmatory: The researcher has a fixed hypothesis (or several fixed hypotheses), whether the study is quantitative or qualitative, and the objective of the study is to find out whether the hypothesis is supported by the data. An example of a completed confirmatory study:

Inozemtseva, L., & Holmes, R. (2014, May). Coverage is not strongly correlated with test suite effectiveness. In Proceedings of the 36th international conference on software engineering (pp. 435-445).

  • Exploratory: The researcher does not have a hypothesis (or has one that may change during the study). Often, the objective of such a study is to understand what is observed and answer questions such as WHY, HOW, WHAT, WHO, or WHEN. We include in this category registrations for which the researcher has an initial proposed solution for an automated approach (e.g., a new deep-learning-based defect prediction approach) that serves as a starting point for their exploration to reach an effective solution. Examples of completed exploratory studies:

Gousios, G., Pinzger, M., & Deursen, A. V. (2014, May). An exploratory study of the pull-based software development model. In Proceedings of the 36th International Conference on Software Engineering (pp. 345-355).
Rodrigues, I. M., Aloise, D., Fernandes, E. R., & Dagenais, M. (2020, June). A Soft Alignment Model for Bug Deduplication. In Proceedings of the 17th International Conference on Mining Software Repositories (pp. 43-53).

The reviewers will evaluate RR track submissions based on the following criteria:

  • The importance of the research question(s).
  • The logic, rationale, and plausibility of the proposed hypotheses.
  • The soundness and feasibility of the methodology and analysis pipeline, including statistical power analysis where appropriate (an illustrative sketch follows this list).
  • (For confirmatory study) Whether the clarity and degree of methodological detail is sufficient to exactly replicate the proposed experimental procedures and analysis pipeline.
  • (For confirmatory study) Whether the authors have pre-specified sufficient outcome-neutral tests for ensuring that the results obtained can test the stated hypotheses, including positive controls and quality checks.
  • (For exploratory study, if applicable) The description of the data set that is the base for exploration.
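Where a power analysis is appropriate, reviewers will expect the planned sample size to be justified. As a purely illustrative sketch (not a required tool or procedure), such an a priori power analysis could be scripted in Python with statsmodels; the effect size, alpha, and power values below are placeholders, not prescribed thresholds:

# A priori power analysis for a two-group comparison (illustrative values only)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,           # assumed medium effect (Cohen's d); replace with your own estimate
    alpha=0.05,                # significance level
    power=0.8,                 # desired statistical power
    alternative="two-sided",
)
print(f"Required participants per group: {n_per_group:.0f}")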

The outcome of the RR report review is one of the following:

  • In-Principle Acceptance (IPA): The reviewers agree that the study is relevant, the outcome of the study (whether it confirms or rejects the hypothesis) is of interest to the community, the protocol for data collection is sound, and the analysis methods are adequate. The authors can engage in the actual study for Stage 2. If the protocol is adhered to (or deviations are thoroughly justified), the study is published. Of course, this being a journal submission, a revision of the submitted manuscript may be necessary. Reviewers will especially evaluate how precisely the protocol of the accepted pre-registered report is followed, or whether deviations are justified.
  • Continuity Acceptance (CA): The reviewers agree that the study is relevant and that the (initial) methods appear appropriate. However, for exploratory studies, implementation details and post-experiment analyses or discussion (e.g., why the proposed automated approach does not work) may require follow-up checks. We will try our best to retain the original reviewers: all PC members are invited on the condition that they agree to review papers in both Stage 1 and Stage 2. Four (4) PC members will review the Stage 1 submission, and three (3) will review the Stage 2 submission.
  • Rejection: The reviewers do not agree on the relevance of the study or are not convinced that the study design is sufficiently mature. Comments are provided to the authors to improve the study design before starting it.

Note: While both confirmatory and exploratory approaches are accepted in principle, authors must be aware that the RR model is trickier to apply to exploratory studies, since predetermining the analysis is more difficult. As such, for CHASE 2023 we will only offer IPA to confirmatory studies. In fact, exploratory studies in software engineering often cannot be adequately assessed until after the study has been completed and the findings are elaborated and discussed in a full paper. For example, consider an RR proposing defect prediction using a new deep learning architecture. This work falls under the exploratory category. It is difficult to offer IPA, as we do not know whether the approach is any better than a traditional one based on, e.g., decision trees. Negative results are welcome; however, it is important that a negative-results paper goes beyond presenting “we tried and failed” and provides interesting insights to readers, e.g., why the results are negative or what that means for further studies on this topic (following the criteria of REplication and Negative Results (RENE) tracks, e.g., https://saner2019.github.io/cfp/RENETrack.html).

Submission Process and Instructions

Follow the submission link below. The timeline for CHASE 2023 RR track will be as follows:

  • Feb. 4: Authors submit the report abstract (not mandatory).
  • Feb. 13: Authors submit their initial report. Submissions must not exceed 6 pages (plus 1 additional page of references). The page limit is strict. Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options).
  • Mar. 31: Authors receive PC members’ reviews.
  • Apr. 14: Authors submit a rebuttal letter + revised report in a single PDF. The response letter should address reviewer comments and questions. The response letter + revised report must not exceed 12 pages (plus 1 additional page of references). The response letter does not need to follow IEEE formatting instructions.
  • Apr. 24: Final notification of Stage 1. Possible outcomes are: in-principle acceptance, continuity acceptance, or rejection.
  • Apr. 30: Authors submit their accepted RR report to arXiv, to be checked by PC members for Stage 2.

Note: Due to the timeline, RR reports will not be published in the CHASE 2023 proceedings. Authors will present their Stage 1 RR during the conference.

The Stage 2 full paper will be submitted for review to EMSE as part of the CHASE’23 Special Issue (important dates to be announced). Further instructions will be provided during the event. However, the following constraints will be enforced:

  • Justifications need to be given for any change of authors. If authors are added or removed, or the author order is changed between the original Stage 1 submission and the EMSE submission, all authors will need to complete and sign a “Change of authorship request form”. The Editors in Chief of EMSE and the chairs of the RR track reserve the right to deny author changes. If you anticipate any authorship changes, please reach out to the chairs of the RR track as early as possible.
  • PC members who reviewed an RR report in Stage 1 and their directly supervised students cannot be added as authors of the corresponding submission in Stage 2.

Submissions can be made via https://icse2023-chase-rr.hotcrp.com/ by the submission deadline. Any submission that does not comply with the aforementioned instructions and the mandatory information specified in the Author’s Guide is likely to be desk rejected. In addition, by submitting, the authors acknowledge that they are aware of and agree to be bound by the following policies: the IEEE Plagiarism FAQ and the ACM Policy and Procedures on Plagiarism. In particular, papers submitted to CHASE 2023 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for CHASE 2023. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases (including immediate rejection and reporting of the incident to IEEE). To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the IEEE, to detect violations of these policies.

Submission Link: Please use https://icse2023-chase-rr.hotcrp.com/ and make sure to choose the appropriate paper type.

Important Dates

  • Abstract Submission (not mandatory): February 4th, 2023
  • RR Submission: February 13th, 2023
  • First Notification (Authors receive reviews): March 31st, 2023
  • Second Round Submission (Authors submit rebuttal & revised report): April 14th, 2023
  • Final Notification (Authors receive notification of Stage 1): April 24th, 2023
  • Accepted RR Report to arXiv Submission: April 30th, 2023

All submission dates are at 23:59 AoE (Anywhere on Earth, UTC-12).

Program

Sun 14 May

Displayed time zone: Hobart

09:00 - 10:30
First day opening / Keynote (Research Track) at Meeting Room 103
Chair(s): Fabio Calefato University of Bari
09:00
30m
Day opening
First day opening
Research Track
Igor Steinmacher Northern Arizona University
09:30
60m
Keynote
The Inclusive Developer: Perspectives and Considerations for Building Inclusive Software
Research Track
Daniela Damian University of Victoria
11:00 - 12:30
Resilience & Quality (Research Track) at Meeting Room 103
Chair(s): Rashina Hoda Monash University
11:00
20m
Talk
Post-pandemic Resilience of Hybrid Software Teams (Full Paper)
Research Track
Ronnie de Souza Santos Cape Breton University, Gianisa Adisaputri Dalhousie University, Paul Ralph Dalhousie University
Pre-print
11:20
20m
Talk
On the perceived relevance of critical internal quality attributes when evolving software features (Full Paper)
Research Track
Eduardo Fernandes Federal University of Minas Gerais (UFMG), Marcos Kalinowski Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
Pre-print
11:40
20m
Talk
What's behind tight deadlines? Business causes of technical debt (NIER paper)
Research Track
Rodrigo Rebouças de Almeida Federal University of Paraiba, Christoph Treude University of Melbourne, Uirá Kulesza Federal University of Rio Grande do Norte
Pre-print
12:00
20m
Talk
Accounting for socio-technical resilience in software engineering (NIER paper)
Research Track
Tamara Lopez The Open University, Helen Sharp The Open University, Michel Wermelinger The Open University, Melanie Langer Lancaster University, Mark Levine Lancaster University, Caroline Jay Department of Computer Science, University of Manchester, M13 9PL, United Kingdom, Yijun Yu The Open University, UK, Bashar Nuseibeh The Open University, UK; Lero, University of Limerick, Ireland
Pre-print
13:45 - 15:15
Collaboration & Human Factors I (Research Track / J1C2) at Meeting Room 103
Chair(s): Marcos Kalinowski Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
13:45
20m
Talk
Exploring a Research Agenda for Design Knowledge Capture in Meetings (NIER paper)
Research Track
Liz Seero Colorado College, Adriana Meza Soria UC Irvine, Andre van der Hoek University of California, Irvine, Janet Burge Colorado College
Pre-print
14:05
20m
Talk
Applying Human Values Theory to Software Engineering Practice: Lessons and Implications (J1C2)
J1C2
Maria Angela Ferrario Queen's University Belfast, Emily Winter Lancaster University
Link to publication DOI Media Attached File Attached
14:25
20m
Talk
Like, dislike, or just do it? How developers approach software development tasks (J1C2)
J1C2
Zainab Masood Prince Sultan University, Rashina Hoda Monash University, Kelly Blincoe University of Auckland, Daniela Damian University of Victoria
Link to publication DOI
14:45
20m
Talk
An Exploratory Study of the Benefits of Time-bounded Collaborative Events for Startup Founders (Full Paper)
Research Track
André Miranda UFPA, Kiev Gama UFPE, Cleidson de Souza Vale Institute of Technology and Federal University of Pará Belém, Brazil
Pre-print
15:45 - 17:15
OSS & Knowledge Communities / Closing (Research Track) at Meeting Room 103
Chair(s): Kiev Gama UFPE
15:45
20m
Talk
Understanding information diffusion about open-source projects on Twitter, HackerNews, and Reddit (Full Paper)
Research Track
Hongbo Fang Carnegie Mellon University, Bogdan Vasilescu Carnegie Mellon University, James Herbsleb Carnegie Mellon University
Pre-print
16:05
20m
Talk
Towards Understanding the Open Source Interest in Gender-Related GitHub Projects (Full Paper)
Research Track
Rita Garcia Unity and Victoria University of Wellington, Christoph Treude University of Melbourne, Wendy La University of Adelaide
Pre-print
16:25
20m
Talk
Hearing the voice of experts: Unveiling Stack Exchange communities’ knowledge of test smells (Full Paper)
Research Track
Luana Martins Federal University of Bahia, Denivan Campos University of Molise, Italy, Railana Santana Federal University of Bahia, Joselito Mota Jr Federal University of Bahia, Heitor Augustus Xavier Costa Federal University of Lavras, Ivan Machado Federal University of Bahia
Pre-print
17:05
20m
Talk
Strategies for Using Websites to Support Programming and Their Impact on Source Code (Full Paper)
Research Track
Omar Alghamdi Department of Computer Science, University of Manchester, M13 9PL, United Kingdom. College of Computing and Informatics, Saudi Electronic University, Riyadh,6867, Saudi Arabia, Sarah Clinch Department of Computer Science, University of Manchester, M13 9PL, United Kingdom, Mohammad Alhamadi Department of Computer Science, University of Manchester, M13 9PL, United Kingdom, Caroline Jay Department of Computer Science, University of Manchester, M13 9PL, United Kingdom
Pre-print
17:25
5m
Day closing
First day closing
Research Track
Igor Steinmacher Northern Arizona University

Mon 15 May

Displayed time zone: Hobart

09:00 - 10:30
Second day opening / Keynote (Research Track) at Meeting Room 103
Chair(s): Hourieh Khalajzadeh Deakin University, Australia
09:15
15m
Day opening
Second day opening
Research Track
Igor Steinmacher Northern Arizona University
09:30
60m
Keynote
Humans of AI
Research Track
Jon Whittle CSIRO's Data61 and Monash University
11:00 - 12:30
Collaboration & Human Factors II (Research Track) at Meeting Room 103
Chair(s): Andrew Begel Carnegie Mellon University
11:00
20m
Talk
Developers Need Protection, Too: Perspectives and Research Challenges for Privacy in Social Coding Platforms (NIER paper)
Research Track
Nicolás E. Díaz Ferreyra Hamburg University of Technology, Abdessamad Imine Lorraine University, Melina Vidoni Australian National University, Riccardo Scandariato Hamburg University of Technology
Pre-print
11:20
20m
Talk
Emotions in Requirements Engineering: A Systematic Mapping Study (Full Paper)
Research Track
Tahira Iqbal University of Tartu, Hina Anwar University of Tartu, Syazwanie Filzah University of Tartu, Mohamad Gharib University of Tartu, Kerli Mooses University of Tartu, Kuldar Taveter University of Tartu, Estonia
Pre-print
11:40
20m
Talk
Addressing Age-Related Accessibility Needs of Senior Users Through Model-Driven Engineering (NIER paper)
Research Track
Shavindra Wickramathilaka Monash University, Ingo Mueller Monash University
Pre-print
12:00
20m
Talk
Perceptions of Task Interdependence in Software Development: An Industrial Case Study (Full Paper)
Research Track
Mayara Benício de Barros Souza Federal University of Pernambuco and UNIVASF, Fabio Q. B. da Silva Federal University of Pernambuco, Carolyn Seaman University of Maryland Baltimore County
Pre-print
13:45 - 15:15
Diversity and Inclusion in Software Engineering (Research Track) at Meeting Room 103
13:45
20m
Talk
Investigating the Perceived Impact of Maternity on Software Engineering: a Women’s Perspective (Full Paper)
Research Track
Larissa Soares Universidade Federal da Bahia, Edna Dias Canedo University of Brasilia (UnB), Claudia Pinto Pereira State University of Feira de Santana, Carla Bezerra Federal University of Ceará, Fabiana Freitas Mendes University of Brasilia (UnB)
Pre-print
14:05
20m
Talk
The State of Diversity and Inclusion in Apache: A Pulse Check (Full Paper)
Research Track
Zixuan Feng Oregon State University, Mariam Guizani Oregon State University, Marco Gerosa Northern Arizona University, Anita Sarma Oregon State University
Pre-print
14:25
20m
Talk
Diversity in Software Engineering: A Survey about Computer Scientists from Underrepresented Groups (NIER paper)
Research Track
Ronnie de Souza Santos Cape Breton University, Brody Stuart-Verner Cape Breton University, Cleyton V. C. de Magalhães Recife Center for Advanced Studies and Systems (CESAR)
Pre-print
14:45
20m
Talk
LGBTQIA+ (In)Visibility in Computer Science and Software Engineering Education (NIER paper)
Research Track
Ronnie de Souza Santos Cape Breton University, Brody Stuart-Verner Cape Breton University, Cleyton V. C. de Magalhães Recife Center for Advanced Studies and Systems (CESAR)
Pre-print
15:45 - 17:15
Registered Reports / Conference closing (Registered Reports / Research Track) at Meeting Room 103
Chair(s): Maria Teresa Baldassarre Department of Computer Science, University of Bari
15:45
15m
Talk
Deconstructing Sentimental Stack Overflow Posts Through Interviews: Exploring the Case of Software Testing (Registered Report)
Registered Reports
Mark Swillus TU Delft, Andy Zaidman Delft University of Technology
Pre-print
16:00
15m
Talk
A Perspective on the Role of Human Behaviors in Software Development: Voice and Silence (Registered Report)
Registered Reports
Mary Sánchez-Gordón Østfold University College, Ricardo Colomo-Palacios Universidad Politécnica de Madrid, Muhammad Azeem Akbar LUT University, Monica Kristiansen Holone Østfold University College
Link to publication DOI Pre-print
16:15
15m
Talk
A Network Perspective on the Influence of Code Review Bots on the Structure of Developer Collaborations (Registered Report)
Registered Reports
Leonore Röseler Department of Informatics, University of Zurich, Ingo Scholtes Chair of Computer Science XV - Machine Learning for Complex Networks, Julius-Maximilians-Universität Würzburg, Christoph Gote Chair of Systems Design, ETH Zurich
Pre-print
16:30
30m
Panel
Discussion and feedback on registered protocols
Registered Reports
S: Raula Gaikovina Kula Nara Institute of Science and Technology, S: Marcos Kalinowski Pontifical Catholic University of Rio de Janeiro (PUC-Rio), S: Helen Sharp The Open University, S: Rashina Hoda Monash University
17:00
15m
Day closing
Conference closing & Awards ceremony
Research Track
Igor Steinmacher Northern Arizona University

Author’s Guide

Please contact the CHASE’23 RR track chairs with any questions, feedback, or requests for clarification. The specific analysis approaches mentioned below are intended as examples, not mandatory components.

I. Title (required)

Provide the working title of your study. It may be the same title that you submit for publication of your final manuscript, but this is not mandatory.

Example: Should your family travel with you on the Enterprise? Subtitle (optional): Effect of accompanying families on the work habits of crew members

II. Authors (required)

At this stage, we believe that a single-anonymous review is most productive.

III. Structured Abstract (required)

The abstract should describe the following in 200 words or so:

  • Background/Context
    What is your research about? Why are you doing this research, why is it interesting?

    Example: “The Enterprise is the flagship of the Federation, and it allows families to travel on board. However, there are no studies that evaluate how this affects the crew members.”

  • Objective/Aim
    What exactly are you studying/investigating/evaluating? What are the objects of the study? We welcome both confirmatory and exploratory types of studies.

    Example (Confirmatory): We evaluate whether the frequency of sick days, work effectiveness, and efficiency differ between science officers who bring their family with them and science officers who serve without their family.
    Example (Exploratory): We investigate the effect of frequent Holodeck use on interpersonal relationships with an ethnographic study using participant observation, in order to derive specific hypotheses about Holodeck usage.

  • Method
    How are you addressing your objective? What data sources are you using?

    Example: We conduct an observational study and use a between subject design. To analyze the data, we use a t-test or Wilcoxon test, depending on the underlying distribution. Our data comes from computer monitoring of Enterprise crew members.
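As a purely illustrative companion to the method example above (the data and variable names are hypothetical, and this is only one possible way to script the decision), the distribution-dependent test choice could look like this in Python:

# Hypothetical sketch: pick a t-test or a Wilcoxon rank-sum test based on a normality check
from scipy import stats

with_family = [3, 5, 4, 6, 2, 5, 4]      # e.g., sick days of crew members with family on board
without_family = [2, 3, 1, 4, 2, 3, 2]   # e.g., sick days of crew members without family

def looks_normal(sample):
    _, p_value = stats.shapiro(sample)   # Shapiro-Wilk normality test
    return p_value > 0.05

if looks_normal(with_family) and looks_normal(without_family):
    result = stats.ttest_ind(with_family, without_family)   # independent-samples t-test
else:
    result = stats.ranksums(with_family, without_family)    # Wilcoxon rank-sum test

print(result)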

IV. Introduction

Give more details on the bigger picture of your study and how it contributes to that bigger picture. An important component of Stage 1 review is assessing the importance and relevance of the study questions, so be sure to explain this.

V. Hypotheses (required for confirmatory study) or research questions

Clearly state the research hypotheses that you want to test with your study, along with a rationale for each hypothesis.

Example:

  • Hypothesis: Science officers with their family on board have more sick days than science officers without their family.
  • Rationale: Since toddlers are often sick, we can expect that crew members with their family onboard need to take sick days more often.

VI. Variables (required for confirmatory study)

  • Independent Variable(s) and their operationalization
  • Dependent Variable(s) and their operationalization (e.g., time to solve a specified task)
  • Confounding Variable(s) and how their effect will be controlled (e.g., species type (Vulcan, Human, Tribble) might be a confounding factor; we control for it by additionally separating our sample into Human/Non-Human and using an ANOVA (normal distribution) or a Friedman test (non-normal distribution) to distill its effect; see the sketch at the end of this section).

For each variable, you should give:

  • name (e.g., presence of family)
  • abbreviation (if you intend to use one)
  • description (whether the family of the crew members travels on board)
  • scale type (nominal: either the family is present or not)
  • operationalization (crew members without family on board vs. crew members with family on board)
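As a purely illustrative sketch of the confounding-variable example above, the species split could be modeled as an additional factor in a two-way ANOVA; the data frame, column names, and the normality assumption below are hypothetical, and the guide does not mandate any particular library:

# Hypothetical sketch: control for the species confound by including it as a factor
# in a two-way ANOVA (assumes normally distributed data, per the example above)
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per crew member (fabricated illustration data)
df = pd.DataFrame({
    "sick_days":       [3, 5, 4, 6, 2, 3, 1, 4],
    "family_on_board": ["yes", "yes", "yes", "yes", "no", "no", "no", "no"],
    "species_group":   ["Human", "Non-Human", "Human", "Non-Human"] * 2,
})

model = smf.ols("sick_days ~ C(family_on_board) + C(species_group)", data=df).fit()
print(anova_lm(model, typ=2))   # effect of family presence, controlling for species group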

VII. Participants/Subjects/Datasets (required)

Describe how and why you select the sample. When you conduct a meta-analysis, describe the primary studies / work on which you base your meta-analysis.

Example: We recruit crew members from the science department on a voluntary basis; they are our target population.

VIII. Execution Plan (required)

Describe the experimental setting and procedure. This includes the methods/tools that you plan to use (be specific about whether you developed them yourself, and how, or whether they already exist), and the concrete steps that you plan to take to support/reject the hypotheses or answer the research questions.

Example: Each crew member needs to sign the informed consent and agreement to process their data according to GDPR. Then, we conduct the interviews. Afterwards, participants need to complete the simulated task.

Examples:

Confirmatory:
https://osf.io/5fptj/ – Do Explicit Review Strategies Improve Code Review Performance?

Exploratory:
https://osf.io/kfu9t – The Impact of Dynamics of Collaborative Software Engineering on Introverts: A Study Protocol
https://osf.io/acnwk – Large-Scale Manual Validation of Bugfixing Changes

Questions? Use the CHASE Registered Reports contact form.