Using Drift Planning to Improve Safety of Visual Navigation in Unmanned Aerial Vehicles
Machine learning (ML) models can provide powerful solutions to many types of problems but are susceptible to changes in the data distribution observed during operations, known as data drift. Unexpected inputs caused by data drift or out-of-distribution (OOD) data can lead to undesirable behavior. Worse, ML-enabled systems often do not expose to users when the underlying model's performance has degraded, so systems can behave in unexpected and undesirable ways without warning. To provide insight into how data drift affects model performance, we developed Portend, a tool set that enables model developers to generate and tune monitors that detect when data drift reduces confidence in the model and that send alerts when confidence no longer meets system needs. A monitor is tuned by artificially inducing drift in test data and computing appropriate metrics to estimate how model performance responds to different drift conditions. Based on simulations, Portend can be used to set metric thresholds that predict when data drift is occurring and is expected to push confidence in the model output below a confidence threshold set for the system. We validate our approach using a metric based on Average Thresholded Confidence (ATC) to simulate drift detection on an autonomous drone system running a neural-network-based visual localization model. We show that Portend provides drift planning support to tune a monitor and detect when data drift has degraded model performance below a threshold, in the context of visual localization for autonomous navigation in unmanned aerial vehicles. Results show that Portend effectively detects data drift, which allows users to observe model performance in operation, enables corrective action, and improves trust in the system.
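The abstract does not spell out how an ATC-based monitor is computed, but the standard ATC formulation can illustrate the idea: choose a confidence threshold on a labeled source (in-distribution) set so that the fraction of source predictions above the threshold matches source accuracy, then estimate accuracy on unlabeled operational data as the fraction of predictions above that threshold. The sketch below is a minimal illustration under that assumption; the function names (`atc_threshold`, `atc_estimate`) and the alert logic are hypothetical and not taken from Portend itself.

```python
import numpy as np

def atc_threshold(source_conf: np.ndarray, source_correct: np.ndarray) -> float:
    """Pick a threshold t on source confidences so that the fraction of
    source examples with confidence above t equals source accuracy."""
    accuracy = source_correct.mean()
    # The (1 - accuracy)-quantile leaves a fraction `accuracy` of
    # confidences above the returned threshold.
    return float(np.quantile(source_conf, 1.0 - accuracy))

def atc_estimate(target_conf: np.ndarray, t: float) -> float:
    """Estimate accuracy on unlabeled target data as the fraction of
    predictions whose confidence exceeds the calibrated threshold."""
    return float((target_conf > t).mean())

# Hypothetical usage: calibrate on in-distribution test data, then
# raise an alert when the estimated accuracy under drift falls below
# a system-level confidence requirement.
source_conf = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.95])
source_correct = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
t = atc_threshold(source_conf, source_correct)

drifted_conf = np.array([0.3, 0.4, 0.5, 0.2, 0.6])  # confidences under induced drift
estimated_accuracy = atc_estimate(drifted_conf, t)
alert = estimated_accuracy < 0.5  # 0.5 is a placeholder system threshold
```

In a drift-planning workflow like the one described above, the drifted confidences would come from test data with artificially induced drift, which is what lets the developer tune the alert threshold before deployment.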