A Combined Approach to Performance Regression Testing Resource Usage Reduction
Performance regression testing is often seen as a natural part of the continuous integration pipeline. The underpinning layers, such as just-in-time compilation, memory mapping, and operating system characteristics, often influence performance measurement samples. To reduce the influence of such non-deterministic factors, the usual practice includes restarting the measured workload, performing warm-up iterations, and controlling environmental variability. These practices must be parameterized by, among other things, the run count, the warm-up iteration count, and the measured iteration count. Importantly, performance testing that detects performance regressions of any magnitude is computationally expensive, because it requires collecting enough samples to detect performance changes with statistical significance.
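To make these parameters concrete, the following sketch shows a typical JVM measurement loop in which each run restarts the workload, discards warm-up iterations so that just-in-time compilation can settle, and records only the remaining iterations. The harness, parameter values, and workload are illustrative assumptions, not the tooling used in the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the measurement scheme described above; all names
// and values are illustrative assumptions, not the paper's harness.
public class BenchmarkSketch {
    static final int RUNS = 10;                // independent restarts (ideally fresh JVM forks)
    static final int WARMUP_ITERATIONS = 50;   // discarded, lets JIT compilation settle
    static final int MEASURED_ITERATIONS = 100;

    public static void main(String[] args) {
        List<long[]> samples = new ArrayList<>();
        for (int run = 0; run < RUNS; run++) {
            long[] times = new long[MEASURED_ITERATIONS];
            for (int i = 0; i < WARMUP_ITERATIONS; i++) {
                workload();                    // warm-up results are discarded
            }
            for (int i = 0; i < MEASURED_ITERATIONS; i++) {
                long start = System.nanoTime();
                workload();
                times[i] = System.nanoTime() - start;
            }
            samples.add(times);
        }
        System.out.printf("Collected %d runs x %d iterations%n", samples.size(), MEASURED_ITERATIONS);
    }

    // Placeholder standing in for the benchmark under test.
    static long workload() {
        long acc = 0;
        for (int i = 0; i < 100_000; i++) acc += i;
        return acc;
    }
}
```

In a real setup, each run would typically be a separate process or JVM fork rather than a loop iteration, which is precisely what makes the run count the dominant cost factor.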
To reduce the costs of performance testing, different methods for code analysis and experiment parameterization can be used. In this work, we address the challenge of identifying the optimal parameters for performance testing. Determining the required run count is non-trivial, especially in environments that use just-in-time compilation, because the number of runs needed depends on both the workload and non-deterministic factors.
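As one way to see why the run count is workload-dependent, the following sketch (our own illustrative example, not the method evaluated in the paper) keeps adding runs until a 95% confidence interval around the mean run time becomes narrower than a target fraction of the mean; noisier workloads need more runs before the loop stops. All constants and names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.LongSupplier;

// Illustrative stopping rule: add runs until the 95% confidence interval
// around the mean run time is narrow enough. Not the paper's method.
public class RunCountSketch {
    static final double Z95 = 1.96;                    // normal approximation
    static final double TARGET_REL_HALF_WIDTH = 0.01;  // 1% of the mean
    static final int MIN_RUNS = 5, MAX_RUNS = 1000;

    static int requiredRuns(LongSupplier oneRunMeanNanos) {
        List<Long> means = new ArrayList<>();
        while (means.size() < MAX_RUNS) {
            means.add(oneRunMeanNanos.getAsLong());
            if (means.size() < MIN_RUNS) continue;
            double mean = means.stream().mapToLong(Long::longValue).average().orElse(0);
            double variance = means.stream()
                    .mapToDouble(m -> (m - mean) * (m - mean))
                    .sum() / (means.size() - 1);
            double halfWidth = Z95 * Math.sqrt(variance / means.size());
            if (halfWidth / mean < TARGET_REL_HALF_WIDTH) break;  // precise enough
        }
        return means.size();
    }

    public static void main(String[] args) {
        // Simulated noisy run means; a real caller would execute the benchmark.
        Random rng = new Random(42);
        int runs = requiredRuns(() -> 1_000_000 + (long) (rng.nextGaussian() * 50_000));
        System.out.println("Runs needed: " + runs);
    }
}
```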
To address these challenges, we have developed an approach that combines several methods for parameter selection in performance testing automation. We created a simulation in which these methods interact, providing a dynamic environment for evaluating their effectiveness.
We evaluated three controller methods on a public dataset from the GraalVM compiler. Based on our evaluation, we find that the Peass method is the most efficient when the change effect size of the training set mirrors the change effect size of the test set, and that the Mutations method maintains constant accuracy regardless of the training set data.
Thu 26 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
16:00 - 18:00
16:00 15m Talk | Leveraging LLM Enhanced Commit Messages to Improve Machine Learning Based Test Case Prioritization (PROMISE 2025) | Yara Q Mahmoud, Ontario Tech University; Akramul Azim, Ontario Tech University; Ramiro Liscano, Ontario Tech University; Kevin Smith, International Business Machines Corporation (IBM); Yee-Kang Chang, International Business Machines Corporation (IBM); Gkerta Seferi, International Business Machines Corporation (IBM); Qasim Tauseef, International Business Machines Corporation (IBM)
16:16 14m Talk | Designing and Optimizing Alignment Datasets for IoT Security: A Synergistic Approach with Static Analysis Insights (PROMISE 2025)
16:31 14m Talk | Efficient Adaptation of Large Language Models for Smart Contract Vulnerability Detection (PROMISE 2025) | Fadul Sikder, Department of Computer Science and Engineering, The University of Texas at Arlington; Jeff Yu Lei, University of Texas at Arlington; Yuede Ji, Department of Computer Science and Engineering, The University of Texas at Arlington
16:46 14m Talk | A Combined Approach to Performance Regression Testing Resource Usage Reduction (PROMISE 2025) | Milad Abdullah, Charles University; David Georg Reichelt, Lancaster University Leipzig, Leipzig, Germany; Vojtech Horky, Charles University; Lubomír Bulej, Charles University; Tomas Bures, Charles University, Czech Republic; Petr Tuma, Charles University
17:01 14m Talk | Security Bug Report Prediction Within and Across Projects: A Comparative Study of BERT and Random Forest (PROMISE 2025) | Farnaz Soltaniani, TU Clausthal; Mohammad Ghafari, TU Clausthal; Mohammed Sayagh, ETS Montreal, University of Quebec
17:16 9m Talk | Towards Build Optimization Using Digital Twins (PROMISE 2025) | Henri Aïdasso, École de technologie supérieure (ÉTS); Francis Bordeleau, École de Technologie Supérieure (ETS); Ali Tizghadam, TELUS
17:26 4m Day closing | Closing (PROMISE 2025)