Improving the Reliability of Failure Prediction Models through Concept Drift Monitoring
Failure prediction models can be highly beneficial in managing large-scale, complex software systems, but their trustworthiness is severely affected by changes in the data over time, also known as concept drift. Monitoring these models for concept drift and retraining them when the data has changed is therefore a crucial step in designing reliable failure prediction models. In this work, we assess the effects of monitoring failure prediction models over time using label-independent (unsupervised) drift detectors. We show that retraining triggered by unsupervised drift detectors, rather than on a fixed schedule, reduces the cost of acquiring true labels without compromising accuracy. Furthermore, we propose a novel feature reduction technique for unsupervised drift detectors and an evaluation pipeline that practitioners can employ to select the most suitable unsupervised drift detector for their application.
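To make the monitoring idea concrete, below is a minimal sketch of label-independent drift monitoring with drift-triggered retraining. It is not the paper's implementation: the class and function names (UnsupervisedDriftMonitor, monitor_and_retrain, get_labels) are hypothetical, the per-feature Kolmogorov-Smirnov test stands in for whichever unsupervised detector a practitioner selects, and PCA is used only as a placeholder for the feature reduction step the paper proposes. It assumes scipy and scikit-learn are available.

```python
# Sketch (assumptions noted above): compare a reference window of features
# against each incoming window with per-feature KS tests, and retrain the
# failure prediction model only when drift is flagged, so true labels are
# requested on demand rather than on a fixed retraining schedule.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA


class UnsupervisedDriftMonitor:
    def __init__(self, n_components=5, alpha=0.01):
        self.pca = PCA(n_components=n_components)  # illustrative feature reduction
        self.alpha = alpha                         # significance level for the tests
        self.reference = None                      # reduced reference window

    def fit_reference(self, X_ref):
        """Store the (reduced) feature distribution the model was trained on."""
        self.reference = self.pca.fit_transform(X_ref)

    def drift_detected(self, X_new):
        """Flag drift if any reduced feature's distribution has shifted (KS test)."""
        reduced = self.pca.transform(X_new)
        p_values = [ks_2samp(self.reference[:, i], reduced[:, i]).pvalue
                    for i in range(reduced.shape[1])]
        # Bonferroni correction across the reduced features
        return min(p_values) < self.alpha / reduced.shape[1]


def monitor_and_retrain(model, monitor, windows, get_labels):
    """Retrain only when the detector fires; get_labels models costly labeling."""
    for X_window in windows:
        if monitor.drift_detected(X_window):
            y_window = get_labels(X_window)   # acquire true labels only on drift
            model.fit(X_window, y_window)
            monitor.fit_reference(X_window)   # reset the reference after retraining
    return model
```

The design choice illustrated here is the one the abstract evaluates: label acquisition happens only inside the drift branch, so its cost scales with the number of detected drifts rather than with the number of retraining intervals.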
Sat 3 May. Displayed time zone: Eastern Time (US & Canada).
11:00 - 12:30: DeepTest session

11:00 (30m, Talk): Lachesis: Predicting LLM Inference Accuracy using Structural Properties of Reasoning Paths
Naryeong Kim (Korea Advanced Institute of Science and Technology), Sungmin Kang (KAIST), Gabin An (KAIST), Shin Yoo (KAIST)
Pre-print available

11:30 (30m, Talk): Improving the Reliability of Failure Prediction Models through Concept Drift Monitoring
Lorena Poenaru-Olaru (TU Delft), Luís Cruz (TU Delft), Jan S. Rellermeyer (Leibniz University Hannover), Arie van Deursen (TU Delft)

12:00 (30m, Talk): On the Effectiveness of LLMs for Manual Test Verifications
Myron David Peixoto (Federal University of Alagoas), Davy Baía (Federal University of Alagoas), Nathalia Nascimento (Pennsylvania State University), Paulo Alencar (University of Waterloo), Baldoino Fonseca (Federal University of Alagoas), Márcio Ribeiro (Federal University of Alagoas, Brazil)