Explainable Human-Machine Teaming using Model Checking and Interpretable Machine Learning
The human-machine teaming paradigm promotes tight teamwork between humans and autonomous machines that collaborate in the same physical space. This paradigm is increasingly widespread in critical domains, such as healthcare and domestic assistance. These systems are expected to build a certain level of trust by enforcing dependability and exhibiting interpretable behavior. However, trustworthiness is negatively affected by the black-box nature of these systems, which typically make fully autonomous decisions that may be confusing for humans or cause hazards in critical domains. We present the EASE approach, whose goal is to build better trust in human-machine teaming by leveraging statistical model checking and model-agnostic interpretable machine learning techniques. We illustrate EASE through an example in the healthcare domain featuring an infinite (dense) space of uncertain human-machine factors, such as the diverse physical and physiological characteristics of the agents involved in the teamwork. Our empirical evaluation demonstrates the suitability and cost-effectiveness of EASE in explaining dependability properties in human-machine teaming.
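To make the combination of statistical model checking and model-agnostic interpretable machine learning more concrete, the following is a minimal sketch of such a pipeline, not the actual EASE implementation: uncertain human-machine factors are sampled from a dense space, a placeholder statistical model checking routine estimates the probability that a dependability property holds for each sampled configuration, and a shallow decision tree is fit as an interpretable surrogate whose splits indicate which factor ranges satisfy the property. The factor names, the probability threshold, and the simulated model are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical uncertain human-machine factors (dense space), e.g.,
# patient walking speed [m/s] and robot battery level [%].
N = 500
factors = rng.uniform(low=[0.2, 10.0], high=[1.5, 100.0], size=(N, 2))

def smc_estimate(speed, battery, runs=100):
    """Placeholder for statistical model checking: estimate the
    probability that the team completes its mission on time by
    simulating a hypothetical human-robot model `runs` times."""
    p_success = 1.0 / (1.0 + np.exp(4.0 * (speed - battery / 80.0)))
    return (rng.random(runs) < p_success).mean()

# Label each sampled configuration as dependable or not, using an
# assumed probability threshold of 0.9 on the verified property.
labels = np.array([smc_estimate(s, b) >= 0.9 for s, b in factors])

# Model-agnostic, interpretable surrogate: a shallow decision tree
# whose splits explain which factor ranges satisfy the property.
tree = DecisionTreeClassifier(max_depth=3).fit(factors, labels)
print(export_text(tree, feature_names=["walking_speed", "battery_level"]))
```

The printed tree reads as a set of human-interpretable rules over the uncertain factors, which is the kind of explanation of dependability properties the approach targets.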