Reproducibility Studies and Negative Results (RENE) Track SANER 2025
Accepted Papers
- Does the Tool Matter? Exploring Some Causes of Threats to Validity in Mining Software Repositories
- Exploring the Relationship between Technical Debt and Lead Time: An Industrial Case Study
- Hidden Figures in Software Engineering: A Replication Study Exploring Undergraduate Software Students’ Awareness of Distinguished Scientists from Underrepresented Groups
- Revisiting Method-Level Change Prediction: Comparative Evaluation at Different Granularities
- Revisiting the Non-Determinism of Code Generation by the GPT-3.5 Large Language Model
Call For Papers
The 32nd edition of the International Conference on Software Analysis, Evolution, and Reengineering (SANER 2025) would like to encourage researchers to (1) reproduce results from previous papers and (2) publish studies with important and relevant negative or null results (results that fail to show an effect yet demonstrate which research paths did not pay off). We would also like to encourage the publication of the negative results or reproducible aspects of previously published work (in the spirit of journal-first submissions). This previously published work includes accepted submissions to the SANER 2025 main track.
Reproducibility studies. Papers in this category must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. Such submissions should at least apply the approach to new data sets (open-source or proprietary). In particular, reproducibility studies are encouraged to target techniques that were previously evaluated only on proprietary systems or only on open-source systems. A reproducibility study should report both the results the authors could reproduce and the aspects of the work that proved irreproducible. We encourage reproducibility studies to follow the ACM guidelines on reproducibility (different team, different experimental setup): “The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts, which they develop completely independently.”
Negative results papers. We seek papers that report negative results, in all types of software engineering research and in any empirical area (qualitative, quantitative, case study, experiment, among others). For example, did your controlled experiment on the value of dual monitors in pair programming fail to show an improvement over a single monitor? Even negative results are valuable when they are not obvious or when they disprove widely accepted wisdom. As Walter Tichy writes, “Negative results, if trustworthy, are extremely important for narrowing down the search space. They eliminate useless hypotheses and thus reorient and speed up the search for better approaches.”
Evaluation Criteria
Both Reproducibility Studies and Negative Results submissions will be evaluated according to the following standards:
- Depth and breadth of the empirical studies
- Clarity of writing
- Appropriateness of conclusions
- Amount of useful, actionable insights
- Availability of instruments and complementary artifacts
- Underlying methodological rigor. For example, a negative result due primarily to misaligned expectations or a lack of statistical power (small samples) is not a good submission.
The negative result should stem from a lack of effect, not from a lack of methodological rigor. Most importantly, we expect reproducibility studies to clearly identify the instruments and artifacts the study builds upon, and to provide links to all artifacts in the submission (the only exception is for papers that reproduce results on proprietary datasets that cannot be publicly released).
Submission Instructions
Submissions must be original: the findings and writing must not have been previously published nor be under consideration elsewhere. However, as either reproducibility studies or negative results, some overlap with previous work is expected; please make this clear in the paper. The publication format should follow the SANER guidelines. Choose “RENE: Replication” or “RENE: NegativeResult” as the submission type.
Submissions will be reviewed following a double-anonymous reviewing process (author names and affiliations must be omitted). Please see the Double-Anonymous instructions on the SANER 2025 Research Papers track page. The EasyChair link for all SANER 2025 tracks is https://easychair.org/conferences/?conf=saner2025.
To submit your paper, please use the same submission link. After clicking on “Make a New Submission,” you will be presented with a list of all available tracks; be sure to select the Reproducibility Studies and Negative Results (RENE) Track.
Submission Formats
Two formats are available. Appendices to conference submissions or to previous work by the authors are limited to 5 pages (including all text, figures, references, and appendices). New reproducibility studies and new descriptions of negative results must not exceed 10 pages (including figures and appendices), plus up to 2 pages containing ONLY references.
Important Dates
- Abstract submission deadline: November 7, 2024, AoE
- Paper submission deadline: November 12, 2024, AoE
- Notifications: December 20, 2024, AoE
- Camera Ready: January 24, 2025, AoE