Exploring the means to measure explainability: Metrics, heuristics and questionnaires.
Due to the increasing complexity of software systems, explainability is becoming more important as a software quality aspect and non-functional requirement. Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. In order to design a user-centered quality model for explainability, we conducted a literature review. We identified ten fundamental aspects of explainability and aggregated criteria and metrics to measure them. We present three types of evaluation means: metrics, questionnaires, and heuristics. We compiled 35 user-centered metrics and questionnaires comprising a collection of 69 questions, which can be evaluated in user studies. Furthermore, we compiled 22 heuristics that can be applied through expert evaluation. We developed a concept for applying the quality model and evaluated it in a small user study with practitioners and researchers. Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering views from different angles on the fundamental aspects of explainability and the related development goals. Thus, we provide a foundation that improves the management and verification of explainability requirements.