Understanding Practitioners’ Perspectives on Monitoring Machine Learning Systems
Given the inherently non-deterministic nature of machine learning (ML) systems, their behavior in production environments can lead to unforeseen and potentially dangerous outcomes. To detect unwanted behavior in a timely manner and protect organizations from financial and reputational damage, monitoring these systems is essential. This paper explores the strategies, challenges, and improvement opportunities for monitoring ML systems from the practitioners’ perspective. We conducted a global survey of 91 ML practitioners to collect diverse insights into current ML monitoring practices. We aim to complement existing research through our qualitative and quantitative analyses, focusing on prevalent runtime issues, industrial monitoring and mitigation practices, key challenges, and desired enhancements in future monitoring tools. Our findings reveal that practitioners frequently struggle with runtime issues related to declining prediction quality, excessive latency, and security violations. While most prefer automated monitoring for its increased efficiency, many still rely on manual approaches due to the complexity involved or the lack of appropriate automation solutions. Practitioners report that the initial setup and configuration of monitoring tools is often complicated and challenging, particularly when integrating with ML systems and setting alert thresholds. Moreover, practitioners find that monitoring adds extra workload, strains resources, and causes alert fatigue. The improvements practitioners most desire are better support for performance and fairness monitoring, recommendations for resolving runtime issues, and automated generation and deployment of monitors. These insights offer valuable guidance for the future development of ML monitoring tools that are better aligned with practitioners’ needs.