Lightweight Approaches to DNN Regression Error Reduction: An Uncertainty Alignment Perspective
Regression errors of Deep Neural Network (DNN) models refer to cases where a prediction was correct under the old-version model but wrong under the new-version model. They frequently occur when upgrading DNN models in production systems, causing disproportionate user experience degradation. In this paper, we propose a lightweight regression error reduction approach with two goals: 1) requiring neither model retraining nor even access to training data, and 2) not sacrificing accuracy. The proposed approach is built upon a key insight rooted in unmanaged model uncertainty, which is intrinsic to DNN models but not well explored, especially in the context of quality assurance of DNN models. Specifically, we propose a simple yet effective ensemble strategy that estimates and aligns the two models' uncertainty on each prediction. We show that a Pareto improvement, reducing regression errors without compromising overall accuracy, can be both guaranteed in theory and largely achieved in practice. Comprehensive experiments with various representative models and datasets confirm that our approach significantly outperforms state-of-the-art alternatives.
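To make the uncertainty-alignment idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes uncertainty is estimated via softmax entropy, and that the ensemble resolves each prediction by deferring to whichever model is less uncertain on that input. The function names (`aligned_ensemble_predict`, etc.) are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Predictive entropy as a simple per-sample uncertainty estimate
    # (one of several possible choices; an assumption of this sketch).
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def aligned_ensemble_predict(old_logits, new_logits):
    """For each sample, return the prediction of whichever model
    is less uncertain, so a confident old-model prediction is not
    overridden by an unconfident new-model one."""
    p_old = softmax(old_logits)
    p_new = softmax(new_logits)
    use_old = entropy(p_old) < entropy(p_new)
    preds_old = p_old.argmax(axis=-1)
    preds_new = p_new.argmax(axis=-1)
    return np.where(use_old, preds_old, preds_new)
```

The per-sample deferral rule is what makes the strategy lightweight: it needs only the two models' output logits at inference time, with no retraining and no training data, matching the abstract's stated constraints.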