ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Plenary

Mon 28 Oct

Displayed time zone: Pacific Time (US & Canada)

10:00 - 10:30
Coffee Break (Catering) at Morgan's
10:00
30m
Coffee break
Break
Catering

10:30 - 12:00
RENE Track Session ([Workshop] RENE) at Carr
10:30
10m
Talk
Automatic Generation of Logical Specifications for Behavioural Models
[Workshop] RENE
Radoslaw Klimek AGH University of Krakow, Julia Witek AGH University of Krakow
10:40
20m
Talk
Group Discussion
[Workshop] RENE

11:00
20m
Talk
MorphQ++: A Reproducibility Study of Metamorphic Testing on Quantum Compilers
[Workshop] RENE
Linsey Kitt Iowa State University, Myra Cohen Iowa State University
11:20
20m
Day opening
RENE Track Opening Session
[Workshop] RENE

11:40
20m
Talk
Taxonomy of Security-related Issues in Android Apps: An Empirical Study
[Workshop] RENE
Teerath Das University of Jyväskylä, Adam Ali Mohammad Ali Jinnah University, Tommi Mikkonen University of Jyväskylä
12:00 - 13:30
12:00
90m
Lunch
Lunch
Catering

Call for Papers

The goal of the Replications and Negative Results (RENE) track is to encourage researchers to (1) reproduce results from previous papers and (2) publish studies with important and relevant negative or null results (results that fail to show an effect, yet demonstrate research paths that did not pay off).

We would also like to encourage the publication of the negative results or reproducible aspects of previously published work. For example, authors of a published paper reporting a working solution for a given problem can document in a “negative results paper” other (failed) attempts they made before defining the working solution they published.

  1. Reproducibility studies. The papers in this category must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. Such submissions should at least apply the approach to new data sets (open-source or proprietary). A reproducibility study should clearly report on results that the authors were able to reproduce as well as on the aspects of the work that were irreproducible. We encourage reproducibility studies to follow the ACM guidelines on reproducibility (different team, different experimental setup): “The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.”

  2. Negative results papers. We seek papers that report negative results from all types of software engineering research in any empirical area (qualitative, quantitative, case study, experiment, etc.). For example, did your controlled experiment not show an improvement over the baseline? Even negative results are valuable when they either are not obvious or disprove widely accepted wisdom. Similar to the Insights Workshop at NAACL, a strong negative results paper would contribute at least one of the following:

    • Broadly applicable empirical recommendations, especially if X that didn't work is something that many practitioners would think reasonable to try, and if the demonstration of X's failure is accompanied by some explanation/hypothesis;
    • Ablation studies of components in previously proposed models, showing that their contributions are different from what was initially reported (for example, if a paper reported significant improvements as being due to component X, an ablation study that shows the majority of the improvement being due to component Y);
    • Datasets or probing tasks showing that previous approaches do not generalize to other domains or language phenomena;
    • Trivial baselines that work suspiciously well for a given task/dataset;
    • Experiments on the (in)stability of previously published results due to hardware, random initializations, preprocessing pipeline components, etc. (see the sketch after this list);
    • Demonstrations of issues with methodology widely used in the SE literature, such as data collection/preprocessing practices or evaluation metrics (e.g., accuracy, F1) that prevent fair comparison of methods.
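
For illustration, here is a minimal sketch of the seed-(in)stability and trivial-baseline checks described above, assuming scikit-learn; the dataset, model, and seed count are hypothetical placeholders, not a prescribed protocol:

    # Minimal sketch (hypothetical setup, assumes scikit-learn): re-run the
    # same pipeline under different random seeds and compare against a
    # trivial majority-class baseline.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split

    # Imbalanced synthetic data stands in for a real benchmark.
    X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)

    scores = []
    for seed in range(10):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed, stratify=y)
        model = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
        pred = model.predict(X_te)
        scores.append((accuracy_score(y_te, pred), f1_score(y_te, pred)))

    acc, f1 = np.array(scores).T
    print(f"accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
    print(f"F1:       {f1.mean():.3f} +/- {f1.std():.3f}")

    # Trivial baseline: always predict the majority class. On imbalanced
    # data it can look "suspiciously good" on accuracy while F1 collapses.
    base = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
    print("baseline accuracy:", accuracy_score(y_te, base.predict(X_te)))

Reporting the spread across seeds, rather than a single best run, is exactly the kind of evidence such a paper would hinge on.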

Evaluation Criteria

Both Reproducibility Studies and Negative Results submissions will be evaluated according to the following standards:

  • Depth and breadth of the empirical studies
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts
  • Underlying methodological rigor. A negative result that stems primarily from misaligned expectations or from a lack of statistical power (small samples) is not a good submission. The negative result should reflect a genuine lack of effect, not a lack of methodological rigor (a minimal power-check sketch follows this list).
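
To make the statistical-power concern concrete, here is a minimal power-check sketch, assuming statsmodels; the effect size and sample sizes are illustrative placeholders:

    # Minimal sketch (assumes statsmodels): before reporting "no effect",
    # check whether the study was powered to detect a plausible effect.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect a medium effect
    # (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"required n per group: {n:.0f}")  # ~64

    # Conversely, the power a 15-per-group study actually had; a null
    # result here says little about the presence of an effect.
    p = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
    print(f"achieved power with n = 15: {p:.2f}")  # ~0.26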

We expect reproducibility studies to clearly identify the artifacts the study builds upon and to provide links to all of those artifacts in the submission (the only exception is for papers that reproduce results on proprietary datasets that cannot be publicly released).

Submission Instructions

Submissions must be original, in the sense that the findings and writing have not been previously published and are not under consideration elsewhere. However, since these are reproducibility studies or negative results papers, some overlap with previous work is expected; please make that overlap clear in the paper.

Publication format should follow the ASE guidelines. Submissions to the RENE Track can be made via the ASE RENE track submission site (https://ase2024-rene.hotcrp.com) by the submission deadline.

Length: There are two formats. (1) Short Papers and Position Papers, which are limited to 3 pages plus references; (2) Long Papers, which are limited to 6 pages plus references.

Position papers must have a title beginning with "Position Paper:".

Important note: the RENE track does not follow a double-anonymous review process.

The official publication date is the date the proceedings are made available in the ACM or IEEE Digital Libraries. This date may be up to two weeks prior to the first day of ASE 2024. The official publication date affects the deadline for any patent filings related to published work.

Purchasing additional pages in the proceedings is not allowed. Full registration and in-person presentation are required for papers accepted at the conference.

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.