ICSE 2022
Sun 8 - Fri 27 May 2022
Wed 11 May 2022 11:05 - 11:10 at ICSE room 2-odd hours - Performance and Reliability Chair(s): Andrea Zisman
Wed 11 May 2022 22:05 - 22:10 at ICSE room 1-even hours - Requirements and More Chair(s): Cecile Peraire

Performance bugs bear a heavy cost on both software developers and end-users. Tools that reduce the occurrence, impact, and repair time of performance bugs can therefore provide key assistance to software developers racing to fix these bugs. Classification models that identify defect-prone commits, an approach referred to as Just-In-Time (JIT) Quality Assurance, are known to be useful because they allow developers to review risky commits while the changes are still fresh in their minds, reducing the cost of developing high-quality software. JIT models, however, rely on the SZZ approach to identify whether or not a change is bug-inducing. Because the fixes to performance bugs may be scattered across the source code, far from their bug-inducing locations, the nature of performance bugs may make SZZ a sub-optimal approach for identifying their bug-inducing commits. Yet, prior studies that leverage or evaluate the SZZ approach do not distinguish performance bugs from other bugs, leading to potential bias in their results.
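
To make the SZZ step concrete, the sketch below shows the core of the heuristic: it diffs a bug-fixing commit against its parent and blames every line the fix removed, treating the blamed commits as candidate bug-inducing commits. The repository path, commit hash, and helper names are hypothetical, and real SZZ variants add further filtering (e.g., discarding commits made after the bug was reported) that is omitted here.

```python
# A minimal sketch of the SZZ heuristic (assumptions: a local git checkout at
# `repo_path` and a known bug-fixing commit hash; both are hypothetical).
# For every line the fix removes, `git blame` on the pre-fix revision names the
# commit that last touched that line; SZZ treats those commits as candidates
# for having induced the bug.
import re
import subprocess

def run_git(repo_path, *args):
    """Run a git command inside `repo_path` and return its stdout."""
    result = subprocess.run(["git", "-C", repo_path, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

def szz_candidates(repo_path, fix_commit):
    """Return the set of commit hashes blamed for lines removed by `fix_commit`."""
    candidates = set()
    diff = run_git(repo_path, "diff", "--unified=0", f"{fix_commit}^", fix_commit)
    current_file, old_line = None, 0
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            current_file = line[len("--- a/"):]
        elif line.startswith("@@"):
            # Hunk header, e.g. "@@ -42,3 +42,4 @@": 42 is the first old line.
            old_line = int(re.match(r"@@ -(\d+)", line).group(1))
        elif line.startswith("-") and not line.startswith("---") and current_file:
            # A line deleted by the fix: blame it in the pre-fix revision.
            blame = run_git(repo_path, "blame", "-L", f"{old_line},{old_line}",
                            f"{fix_commit}^", "--", current_file)
            candidates.add(blame.split()[0].lstrip("^"))
            old_line += 1
    return candidates

# Hypothetical usage:
# print(szz_candidates("/path/to/cassandra", "abc1234"))
```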

In this paper, we conduct an empirical study of JIT defect prediction for performance bugs. We concentrate on SZZ's ability to identify the bug-inducing commits of performance bugs in two open-source projects, Cassandra and Hadoop. We verify whether the commits that SZZ identifies are truly bug-inducing by manually examining them, cross-referencing fix commits and JIRA bug reports. We then evaluate JIT models by using them to identify the bug-inducing commits of performance-related bugs. Our findings show that JIT defect prediction classifies non-performance bug-inducing commits better than performance bug-inducing commits, i.e., the SZZ approach does introduce errors when identifying bug-inducing commits. However, we find that manually correcting these errors in the training data only slightly improves the models. In the absence of a large number of correctly labelled performance bug-inducing commits, our findings show that combining all available training data (i.e., truly performance bug-inducing commits, non-performance bug-inducing commits, and non-bug-inducing commits) yields the best classification results.
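
As a rough illustration of the "combine all available training data" setup described above (a sketch under assumed data, not the authors' implementation), the following trains a standard JIT-style classifier on commits labelled as performance bug-inducing, non-performance bug-inducing, or clean, merging both kinds of bug-inducing commits into the positive class.

```python
# An illustrative sketch (not the authors' implementation) of a JIT defect
# prediction model trained on the combined data described above: commits
# labelled "perf-inducing", "non-perf-inducing", or "clean". The CSV file,
# feature names, and label values are assumptions made for this example.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

commits = pd.read_csv("labelled_commits.csv")  # hypothetical labelled dataset

# Common change-level JIT metrics (hypothetical column names).
features = ["lines_added", "lines_deleted", "files_touched",
            "developer_experience", "change_entropy"]

# Combine both kinds of bug-inducing commits into the positive class;
# clean (non-bug-inducing) commits form the negative class.
X = commits[features]
y = (commits["label"] != "clean").astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```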

Wed 11 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:00
Performance and Reliability (Technical Track / Journal-First Papers) at ICSE room 2-odd hours
Chair(s): Andrea Zisman The Open University
11:00
5m
Talk
Predicting unstable software benchmarks using static source code features
Journal-First Papers
Christoph Laaber Simula Research Laboratory, Mikael Basmaci University of Zurich, Pasquale Salza University of Zurich
11:05
5m
Talk
Evaluating the impact of falsely detected performance bug-inducing changes in JIT models
Journal-First Papers
Sophia Quach Concordia University, Maxime Lamothe Polytechnique Montréal, Bram Adams Queen's University, Yasutaka Kamei Kyushu University, Weiyi Shang Concordia University
11:10
5m
Talk
Using Reinforcement Learning for Load Testing of Video Games
Technical Track
Rosalia Tufano Università della Svizzera Italiana, Simone Scalabrino University of Molise, Luca Pascarella Università della Svizzera italiana (USI), Emad Aghajani Software Institute, USI Università della Svizzera italiana, Rocco Oliveto University of Molise, Gabriele Bavota Software Institute, USI Università della Svizzera italiana
11:15
5m
Talk
EAGLE: Creating Equivalent Graphs to Test Deep Learning Libraries
Technical Track
Jiannan Wang Purdue University, Thibaud Lutellier University of Waterloo, Shangshu Qian Purdue University, Hung Viet Pham University of Waterloo, Lin Tan Purdue University
11:20
5m
Talk
Decomposing Software Verification into Off-the-Shelf Components: An Application to CEGAR
Technical Track
Dirk Beyer LMU Munich, Germany, Jan Haltermann University of Oldenburg, Thomas Lemberger LMU Munich, Heike Wehrheim Carl von Ossietzky Universität Oldenburg / University of Oldenburg
11:25
5m
Talk
Precise Divide-By-Zero Detection with Affirmative Evidence
Technical Track
Yiyuan Guo The Hong Kong University of Science and Technology, Ant Group, Jinguo Zhou Ant Group, Peisen Yao The Hong Kong University of Science and Technology, Qingkai Shi Ant Group, Charles Zhang Hong Kong University of Science and Technology
22:00 - 23:00
Requirements and More at ICSE room 1-even hours
Chair(s): Cecile Peraire
22:00
5m
Talk
Continuously Managing NFRs: Opportunities and Challenges in Practice
Journal-First Papers
Colin Werner University of Victoria, Ze Shi (Zane) Li University of Victoria, Canada, Derek Lowlind University of Victoria, Omar Elazhary University of Victoria, Neil Ernst University of Victoria, Daniela Damian University of Victoria
22:05
5m
Talk
Evaluating the impact of falsely detected performance bug-inducing changes in JIT models
Journal-First Papers
Sophia Quach Concordia University, Maxime Lamothe Polytechnique Montréal, Bram Adams Queen's University, Yasutaka Kamei Kyushu University, Weiyi Shang Concordia University
22:10
5m
Talk
Issues in the Adoption of the Scaled Agile Framework
SEIP - Software Engineering in Practice
Paolo Ciancarini University of Bologna / Innopolis University, Artem Kruglov Innopolis University, Witold Pedrycz University of Alberta, Dilshat Salikhov Innopolis University, Giancarlo Succi
22:15
5m
Talk
How to Debug Inclusivity Bugs? A Debugging Process with Information Architecture
SEIS - Software Engineering in Society
Mariam Guizani Oregon State University, Igor Steinmacher Northern Arizona University, Jillian Emard Oregon State University, Abrar Fallatah Oregon State University, Margaret Burnett Oregon State University, Anita Sarma Oregon State University
22:20
5m
Talk
Generating and Visualizing Trace Link Explanations
Technical Track
Yalin Liu University of Notre Dame, Jinfeng Lin University of Notre Dame, Oghenemaro Anuyah University of Notre Dame, Ronald Metoyer University of Notre Dame, Jane Cleland-Huang University of Notre Dame

Information for Participants
Wed 11 May 2022 11:00 - 12:00 at ICSE room 2-odd hours - Performance and Reliability Chair(s): Andrea Zisman

Wed 11 May 2022 22:00 - 23:00 at ICSE room 1-even hours - Requirements and More Chair(s): Cecile Peraire