Explanation-driven Self-adaptation using Model-agnostic Interpretable Machine Learning
Self-adaptive systems increasingly rely on machine learning techniques, such as neural networks, as black-box models to make decisions and steer adaptations. The lack of transparency of these predictive models makes it hard to explain adaptation decisions and their possible effects on the surrounding environment. Furthermore, adaptation decisions in this context are typically the outcome of expensive optimization processes: since the internal mechanisms of the black-box predictive models cannot be directly observed or comprehended, iterative methods must be employed to explore a possibly large search space and optimize with respect to multiple goals. Balancing the trade-off between effectiveness and cost thus becomes a crucial challenge. In this paper, we propose explanation-driven self-adaptation, a novel approach that embeds model-agnostic interpretable machine learning techniques into the feedback loop to enhance the transparency of the predictive models and to gain insights that help drive adaptation decisions effectively while significantly reducing the cost of planning them. Our empirical evaluation on two subjects in the robotics domain demonstrates the cost-effectiveness of our approach.
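To give a concrete flavor of the idea, the sketch below shows one possible (and purely illustrative) way a model-agnostic explanation could feed the planning step: permutation feature importance is computed over a black-box quality predictor, and the resulting ranking is used to freeze the least influential adaptation parameters, shrinking the search space the planner must explore. This is not the paper's actual algorithm; the explainer choice, the function names (`permutation_importance`, `prune_search_space`), and the pruning policy are all assumptions made for illustration.

```python
# Illustrative sketch only (not the approach proposed in the paper): a model-agnostic
# explanation of a black-box predictor used to prune the adaptation search space.
import itertools
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, rng=None):
    """Model-agnostic importance: how much does shuffling each feature degrade accuracy?"""
    rng = rng or np.random.default_rng(0)
    baseline = np.mean((predict(X) - y) ** 2)            # baseline prediction error (MSE)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])                     # break the feature/target relation
            errors.append(np.mean((predict(X_perm) - y) ** 2))
        importances[j] = np.mean(errors) - baseline       # error increase = importance
    return importances

def prune_search_space(option_grid, importances, keep=2):
    """Keep only the `keep` most influential adaptation parameters; freeze the rest at
    their current (first) value, shrinking the space the planner must explore."""
    ranked = np.argsort(importances)[::-1]
    frozen = set(ranked[keep:])
    reduced = [vals if j not in frozen else vals[:1] for j, vals in enumerate(option_grid)]
    return list(itertools.product(*reduced))
```

In this hypothetical setup, `predict` is the black-box model used by the feedback loop, `X`/`y` are logged observations, and `option_grid` lists the candidate values of each adaptation parameter; the planner then evaluates only the reduced set of configurations instead of the full Cartesian product.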