Analyzing the Impact of Workloads on Modeling the Performance of Configurable Software Systems
Context: Modern software systems often provide configuration options to meet customer requirements, including requirements on a system's performance behavior. Performance models derived via machine learning are an established approach for estimating and optimizing configuration-dependent software performance.
Problem: Most existing approaches in this area rely on software performance measurements conducted with a single workload (i.e., the input fed to a system). This single workload, however, is often not representative of a software system's real-world application scenarios. Understanding to what extent configuration and workload, individually and combined, cause a software system's performance to vary is key to understanding whether performance models generalize across different configurations and workloads. Yet, so far, this aspect has not been systematically studied.
Method: To fill this gap, we conducted a systematic empirical study across 25,258 configurations of nine real-world configurable software systems to investigate the effects of workload variation on system-level performance and on individual configuration options. We explore the driving causes of workload-configuration interactions by enriching performance observations with option-specific code coverage information.
Results: Our results indicate that workloads can induce substantial performance variation and can interact with configuration options, often in non-monotonic ways. This not only limits the generalizability of single-workload approaches, but also calls into question assumptions underlying existing transfer learning techniques. We demonstrate that workloads should be considered when building performance prediction models to maintain and improve their representativeness and reliability.
Fri 19 May (displayed time zone: Hobart)
13:45 - 15:15 | Software performance (DEMO - Demonstrations / NIER - New Ideas and Emerging Results / Technical Track / SEIP - Software Engineering in Practice) at Level G - Plenary Room 1. Chair(s): Philipp Leitner (Chalmers University of Technology / University of Gothenburg, Sweden)

13:45 (15m) Talk: Analyzing the Impact of Workloads on Modeling the Performance of Configurable Software Systems (Technical Track). Stefan Mühlbauer (Leipzig University), Florian Sattler (Saarland Informatics Campus, Saarland University), Christian Kaltenecker (Saarland University, Germany), Johannes Dorn (Leipzig University), Sven Apel (Saarland University), Norbert Siegmund (Leipzig University). Pre-print

14:00 (15m) Talk: Twins or False Friends? A Study on Energy Consumption and Performance of Configurable Software (Technical Track). Max Weber (Leipzig University), Christian Kaltenecker (Saarland University, Germany), Florian Sattler (Saarland Informatics Campus, Saarland University), Sven Apel (Saarland University), Norbert Siegmund (Leipzig University). Link to publication

14:15 (15m) Talk: Auto-tuning elastic applications in production (SEIP - Software Engineering in Practice). Adalberto R. Sampaio Jr (Huawei Canada), Ivan Beschastnikh (University of British Columbia), Daryl Maier (IBM Canada), Don Bourne (IBM Canada), Vijay Sundaresan (IBM Canada)

14:30 (7m) Talk: CryptOpt: Automatic Optimization of Straightline Code (DEMO - Demonstrations). Joel Kuepper (University of Adelaide), Andres Erbsen (MIT), Jason Gross (MIT CSAIL), Owen Conoly (MIT), Chuyue Sun (Stanford), Samuel Tian (MIT), David Wu (University of Adelaide), Adam Chlipala (Massachusetts Institute of Technology), Chitchanok Chuengsatiansup (University of Adelaide), Daniel Genkin (Georgia Tech), Markus Wagner (Monash University, Australia), Yuval Yarom (Ruhr University Bochum). Link to publication

14:37 (7m) Talk: Performance Analysis with Bayesian Inference (NIER - New Ideas and Emerging Results). Noric Couderc (Lund University), Christoph Reichenbach (Lund University), Emma Söderberg (Lund University)

14:45 (15m) Talk: Runtime Performance Prediction for Deep Learning Models with Graph Neural Network (SEIP - Software Engineering in Practice). Yanjie Gao (Microsoft Research), Xianyu Gu (Tsinghua University), Hongyu Zhang (The University of Newcastle), Haoxiang Lin (Microsoft Research), Mao Yang (Microsoft Research). Pre-print

15:00 (7m) Talk: Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks (NIER - New Ideas and Emerging Results). Dmitry Pasechnyuk (Mohammed bin Zayed University of Artificial Intelligence, UAE), Anton Prazdnichnykh, Mikhail Evtikhiev (JetBrains Research), Timofey Bryksin (JetBrains Research)

15:07 (7m) Talk: Who Ate My Memory? Towards Attribution in Memory Management (SEIP - Software Engineering in Practice). Gunnar Kudrjavets (University of Groningen), Ayushi Rastogi (University of Groningen, The Netherlands), Jeff Thomas (Meta Platforms, Inc.), Nachiappan Nagappan (Facebook). Pre-print