SANER 2024
Tue 12 - Fri 15 March 2024, Rovaniemi, Finland

Wed 13 Mar

Displayed time zone: Athens

14:00 - 15:30
API and Dependency Analysis
Research Papers / Reproducibility Studies and Negative Results (RENE) Track at LAPPI
Chair(s): Martin Monperrus KTH Royal Institute of Technology
14:00
15m
Talk
The Limits of the Identifiable: Challenges in Python Version Identification with Deep Learning
Reproducibility Studies and Negative Results (RENE) Track
Marcus Gerhold University of Twente, Lola Solovyeva University of Twente, Vadim Zaytsev University of Twente, Netherlands
Pre-print

Thu 14 Mar

Displayed time zone: Athens

11:00 - 12:30
12:06
15m
Talk
Assessing the Security of GitHub Copilot’s Generated Code - A Targeted Replication Study
Reproducibility Studies and Negative Results (RENE) Track
Vahid Majdinasab Polytechnique Montréal, Michael Joshua Bishop Massey University, Shawn Rasheed Universal College of Learning, Arghavan Moradi Dakhel Polytechnique Montréal, Amjed Tahir Massey University, Foutse Khomh Polytechnique Montréal
14:00 - 15:30
Defect Prediction and Analysis II
Research Papers / Industrial Track / Reproducibility Studies and Negative Results (RENE) Track at KURU
Chair(s): Masud Rahman Dalhousie University
14:15
15m
Talk
On The Effectiveness of One-Class Support Vector Machine in Different Defect Prediction Scenarios
Reproducibility Studies and Negative Results (RENE) Track
Rebecca Moussa University College London, Danielle Azar Lebanese American University, Federica Sarro University College London
14:30
15m
Talk
On the Stability and Applicability of Deep Learning in Fault Localization
Reproducibility Studies and Negative Results (RENE) Track
Viktor Csuvik MTA-SZTE Research Group on Artificial Intelligence, University of Szeged, Roland Aszmann University of Szeged, Árpád Beszédes University of Szeged, Ferenc Horváth University of Szeged, Tibor Gyimóthy University of Szeged, Hungary

Fri 15 Mar

Displayed time zone: Athens

09:00 - 10:30
Program Comprehension
Journal First Track / Research Papers / Reproducibility Studies and Negative Results (RENE) Track at LAPPI
Chair(s): Kim Mens Université catholique de Louvain, ICTEAM institute, Belgium
09:00
15m
Talk
List Comprehension Versus for Loops Performance in Real Python Projects: Should we Care?
Reproducibility Studies and Negative Results (RENE) Track
Cyrine Zid École Polytechnique de Montréal, François Belias École Polytechnique de Montréal, Massimiliano Di Penta University of Sannio, Italy, Foutse Khomh Polytechnique Montréal, Giulio Antoniol École Polytechnique de Montréal
11:00 - 12:30
11:44
15m
Talk
Sentiment of Technical Debt Security Questions on Stack Overflow: A Replication Study
Reproducibility Studies and Negative Results (RENE) Track
Jarl Jansen Eindhoven University of Technology, Nathan Cassee Eindhoven University of Technology, Alexander Serebrenik Eindhoven University of Technology
16:00 - 17:00
Managing Workflows and People
Reproducibility Studies and Negative Results (RENE) Track / Industrial Track at KURU
Chair(s): Ipek Ozkaya Carnegie Mellon University
16:30
15m
Talk
Agile Effort Estimation: Have We Solved the Problem Yet? Insights From A Second Replication Study (GPT2SP Replication Report)
Reproducibility Studies and Negative Results (RENE) Track
Vali Tawosi J.P. Morgan AI Research, Rebecca Moussa University College London, Federica Sarro University College London

Accepted Papers

Agile Effort Estimation: Have We Solved the Problem Yet? Insights From A Second Replication Study (GPT2SP Replication Report)
Reproducibility Studies and Negative Results (RENE) Track
Assessing the Security of GitHub Copilot’s Generated Code - A Targeted Replication Study
Reproducibility Studies and Negative Results (RENE) Track
List Comprehension Versus for Loops Performance in Real Python Projects: Should we Care?
Reproducibility Studies and Negative Results (RENE) Track
On The Effectiveness of One-Class Support Vector Machine in Different Defect Prediction Scenarios
Reproducibility Studies and Negative Results (RENE) Track
On the Stability and Applicability of Deep Learning in Fault Localization
Reproducibility Studies and Negative Results (RENE) Track
Sentiment of Technical Debt Security Questions on Stack Overflow: A Replication Study
Reproducibility Studies and Negative Results (RENE) Track
The Limits of the Identifiable: Challenges in Python Version Identification with Deep Learning
Reproducibility Studies and Negative Results (RENE) Track
Pre-print

Call for Papers

The 31st edition of the International Conference on Software Analysis, Evolution, and Reengineering (SANER’24) encourages researchers to (1) reproduce results from previous papers and (2) publish studies with important and relevant negative or null results (results that fail to show an effect yet reveal research paths that did not pay off). We also encourage the publication of the negative results or reproducible aspects of previously published work (in the spirit of journal-first submissions); this previously published work includes accepted submissions to the 2024 SANER main track.

Reproducibility studies. Papers in this category must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided with the original paper. Such submissions should at least apply the approach to new data sets (open-source or proprietary). In particular, reproducibility studies are encouraged to target techniques that were previously evaluated only on proprietary systems, or only on open-source systems. A reproducibility study should report both the results the authors could reproduce and the aspects of the work that proved irreproducible. We encourage reproducibility studies to follow the ACM guidelines on reproducibility (different team, different experimental setup): “The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.”

Negative results papers. We seek papers that report negative results from any area of empirical software engineering research (qualitative, quantitative, case study, experiment, among others). For example, did your controlled experiment on the value of dual monitors in pair programming fail to show an improvement over a single monitor? Results, even if negative, are still valuable when they are not obvious or when they disprove widely accepted wisdom. As Walter Tichy writes, “Negative results, if trustworthy, are extremely important for narrowing down the search space. They eliminate useless hypotheses and thus reorient and speed up the search for better approaches.”

Evaluation Criteria

Both Reproducibility Studies and Negative Results submissions will be evaluated according to the following criteria:

- Depth and breadth of the empirical studies

- Clarity of writing

- Appropriateness of conclusions

- Amount of useful, actionable insights

- Availability of instruments and complementary artifacts

- Methodological rigor. For example, a negative result due primarily to misaligned expectations or a lack of statistical power (small samples) is not a good submission (a minimal power check is sketched after this list).

However, the negative result should stem from a lack of effect, not from a lack of methodological rigor. Most importantly, we expect reproducibility studies to clearly identify the instruments and artifacts the study builds upon, and to provide links to all the artifacts in the submission (the only exception is for papers that reproduce results on proprietary datasets that cannot be publicly released).
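
To make the statistical-power point concrete, here is a minimal sketch in Python of the kind of a priori power check that separates a genuine lack of effect from an underpowered design. It assumes the statsmodels package; the effect size, group size, and alpha values are illustrative assumptions, not values from any actual submission.

# A minimal power check for a two-group comparison (statsmodels).
# All numbers (effect size d, group size, alpha) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Achieved power with 20 subjects per group and a medium effect (d = 0.5):
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with n=20 per group: {power:.2f}")   # about 0.34: underpowered

# Subjects per group needed to reach the conventional 0.8 power level:
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 0.8 power: {n_needed:.0f}")  # about 64

Under these assumptions, a null result from the 20-subject design says little, while the same null result at roughly 64 subjects per group would be far more informative.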

Submission Instructions

Submissions must be original: the findings and writing must not have been previously published nor be under consideration elsewhere. However, since reproducibility studies and negative results revisit earlier work, some overlap with previous work is expected; please make this overlap clear in the paper. The publication format should follow the SANER guidelines. Choose “RENE: Replication” or “RENE: NegativeResult” as the submission type.

Length: There are two formats. Appendices to conference submissions or previous work by the authors may be described in up to four pages; new reproducibility studies and new descriptions of negative results may take up to ten pages.

Important note: the RENE track of SANER 2024 DOES NOT FOLLOW a full double-blind review process.

Important Dates

- Abstract submission deadline: November 10, 2023, AoE

- Paper submission deadline: November 17, 2023, AoE

- Notifications: December 20, 2023, AoE

- Camera Ready: January 19, 2024, AoE
