An Empirical Investigation into Maintenance of Load Testing Scripts
Modern software systems are expected to deliver high performance under a variety of workloads. Load testing has become a widely adopted technique for automatically verifying whether a system operates correctly under specific load conditions. As software systems evolve, their load requirements, such as performance thresholds and usage patterns, also change, necessitating updates to the load tests.
In this study, we investigate the maintenance of load testing scripts to better understand how load requirements evolve and how these changes are reflected in the tests themselves. Specifically, we analyze 35 open-source software (OSS) repositories that incorporate load testing, examining the frequency and nature of load test updates. Our analysis reveals that 45.7% of the studied projects never update their load testing scripts after initial creation, while a small subset of projects maintains these scripts extensively and continuously. Furthermore, we identify 20 distinct update types, grouped into 5 major categories of purposes for load testing script modifications. The most frequent update type relates to "Test Maintenance," followed by "Test Scenario Modification."
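For context, a load testing script typically encodes the usage pattern to simulate and the load conditions to apply, which is where the updates studied here would occur. The sketch below is a minimal, hypothetical example using Locust, a popular Python load testing tool; the user class, endpoints, task weights, and wait times are illustrative assumptions and are not drawn from the studied projects.

```python
# Minimal, hypothetical Locust script, shown only to illustrate the kind of
# artifact the study examines; endpoints, task weights, and wait times are
# assumptions, not taken from any of the studied repositories.
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Usage pattern: each simulated user pauses 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task(3)  # Browsing is weighted three times as frequent as searching.
    def view_items(self):
        self.client.get("/items")

    @task(1)
    def search(self):
        self.client.get("/search?q=example")
```

Under this framing, a "Test Scenario Modification" might adjust the simulated endpoints or task weights as usage patterns evolve, while a threshold-related update might change the response-time criteria against which a run is judged.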
Fri 3 Oct (times displayed in the Hawaii time zone)

14:00 - 15:20 | Software Testing
ESEM - Emerging Results and Vision Track / ESEM - Journal First Track / ESEM - Technical Track, at Kaiulani II
Chair(s): Márcio Ribeiro (Federal University of Alagoas, Brazil)

14:00 (16m Talk) | An Empirical Investigation into Maintenance of Load Testing Scripts
ESEM - Emerging Results and Vision Track
Ibuki Nakamura, Kosei Horikawa, Brittany Reid, Yutaro Kashiwa, Hajimu Iida (Nara Institute of Science and Technology)

14:16 (16m Talk) | A Vision for Debiasing Confirmation Bias in Software Testing via LLM
ESEM - Emerging Results and Vision Track
Iflaah Salman (Lappeenranta-Lahti University of Technology (LUT)), Muhammad Waseem (Faculty of Information Technology and Communication Sciences, Tampere University, Finland), Vladimir Mandić (Faculty of Technical Sciences, University of Novi Sad), Rasanjana Dhanushkha De Alwis (Lappeenranta-Lahti University of Technology LUT)

14:32 (16m Talk) | Comparing effectiveness and efficiency of interactive application security testing (IAST) and runtime application self-protection (RASP) tools in a large Java-based system
ESEM - Journal First Track
Aishwwarya Seth (Microsoft), Saikath Bhattacharya (Illinois State University), Sarah Elder (UNC-Wilmington), Nusrat Zahan (North Carolina State University), Laurie Williams (North Carolina State University)

14:48 (16m Talk) | Is Diversity a Meaningful Metric in Fairness Testing?
ESEM - Technical Track

15:04 (16m Talk) | Where Tests Fall Short: Empirically Analyzing Oracle Gaps in Covered Code
ESEM - Technical Track
Megan Maton (University of Sheffield), Gregory Kapfhammer (Allegheny College), Phil McMinn (University of Sheffield)