Mastering Uncertainty in Performance Estimations of Configurable Software Systems
Understanding the influence of configuration options on performance is key to finding an optimal system configuration, understanding the system, and debugging performance. Prior research has proposed a number of performance-influence modeling approaches, all of which assign scalar values to option influences and model predictions. However, such point estimates falsely imply certainty about an option's influence and neglect several sources of uncertainty in the assessment process, such as (1) measurement bias, (2) model representation and learning process, and (3) incomplete data. As a result, different approaches assign different scalar performance values to options and to interactions among them. The true influence is thus uncertain, yet there is no way to even quantify this uncertainty with state-of-the-art performance-modeling approaches.
We propose a novel approach based on probabilistic programming that explicitly models the uncertainty of option influences and, consequently, provides a confidence interval for each prediction alongside a scalar value. This way, we can explain, for the first time, why predictions may be erroneous and which options' influences may be unreliable. Our evaluation on 10 real-world subject systems shows that our implementation, P4, yields prediction errors on par with the state of the art when considering only the scalar component of a prediction, achieving competitive accuracy while providing reliable confidence intervals.
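To make the idea concrete, below is a minimal sketch of a Bayesian linear performance-influence model written with NumPyro. This is not the authors' P4 implementation; it only illustrates the general technique the abstract describes: each option's influence is modeled as a random variable, and inference yields a posterior from which both a scalar estimate and a confidence (credible) interval can be read off. All names and the synthetic data are illustrative assumptions, and interaction terms are omitted for brevity.

```python
# Hedged sketch of a Bayesian performance-influence model (illustrative only;
# the actual P4 implementation may differ). Each configuration option gets a
# *distribution* over its influence instead of a single scalar.
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def performance_model(X, y=None):
    """X: (n_configs, n_options) binary option matrix; y: measured performance."""
    n_options = X.shape[1]
    base = numpyro.sample("base", dist.Normal(0.0, 100.0))                    # baseline performance
    influence = numpyro.sample("influence",
                               dist.Normal(0.0, 10.0).expand([n_options]))    # per-option influence
    sigma = numpyro.sample("sigma", dist.HalfNormal(5.0))                     # measurement noise
    mu = base + jnp.dot(X, influence)
    numpyro.sample("obs", dist.Normal(mu, sigma), obs=y)

# Synthetic example data (purely illustrative): 3 options, 50 measured configurations.
key = random.PRNGKey(0)
X = random.bernoulli(key, 0.5, (50, 3)).astype(jnp.float32)
true_influence = jnp.array([5.0, -2.0, 0.5])
y = 20.0 + X @ true_influence + 0.5 * random.normal(random.PRNGKey(1), (50,))

mcmc = MCMC(NUTS(performance_model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(2), X, y)
posterior = mcmc.get_samples()

# Scalar estimate plus a 95% credible interval for each option's influence ...
lo, hi = jnp.percentile(posterior["influence"], jnp.array([2.5, 97.5]), axis=0)
print("option influences:", posterior["influence"].mean(axis=0), "95% CI:", lo, hi)

# ... and for a prediction on a new, unmeasured configuration.
x_new = jnp.array([1.0, 0.0, 1.0])
pred = posterior["base"] + posterior["influence"] @ x_new
print("prediction:", pred.mean(), "95% CI:", jnp.percentile(pred, jnp.array([2.5, 97.5])))
```

The key design point, as argued in the abstract, is that the posterior width directly exposes which option influences are uncertain, so unreliable predictions can be flagged rather than silently reported as exact scalars.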
Johannes Dorn (Leipzig University), Sven Apel (Saarland University, Germany), Norbert Siegmund (Leipzig University)