Wed 26 Jun 2024 13:45 - 14:15 at M101 - Explainability
Chair(s): Krzysztof Wnuk

With the rise of artificial intelligence in industry, many companies rely on machine learning methods such as time series forecasting. By processing data from the past, such systems can predict future data. In practice, however, there is often skepticism about the quality of the forecasts. Explainability has been identified as a means to address this skepticism and foster trust. While there are already different methods to explain time series forecasts, it is unclear which of these explanations are actually useful for stakeholders. To investigate the need for explanations of time series forecasts, we conducted a study at a mid-sized German company in the energy domain. During the study, 23 participants were shown five examples of different explanation types. For each explanation type, we tested whether it actually helped our participants better understand the forecasts. We found that visual explanations, including decision trees and feature importance charts, improved domain experts' understanding of time series forecasts. Textual explanations tended to lead to confusion rather than empowerment. While the exact findings and preferred types of explanations may vary between companies, our concrete results can provide a starting point for in-depth analyses in other environments.
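To give a concrete idea of what a feature importance chart for a time series forecast looks like, here is a minimal sketch. It is not taken from the paper: the synthetic energy-demand series, the lagged-feature setup, and the choice of a scikit-learn RandomForestRegressor are all illustrative assumptions standing in for whatever forecasting model a company might use.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Toy daily demand series: slight trend + weekly seasonality + noise.
    rng = np.random.default_rng(0)
    t = np.arange(400)
    series = 0.05 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

    # Lagged features: predict today's value from the previous 7 days.
    lags = 7
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)

    # Impurity-based importance: one score per lag feature.
    # Column 0 of X is the oldest lag (t-7), the last column is t-1.
    for lag, score in zip(range(lags, 0, -1), model.feature_importances_):
        print(f"t-{lag}: {score:.3f}")

Plotting these per-lag scores as a bar chart yields the general kind of visual explanation the abstract refers to; on the toy data above, one would expect the t-1 and t-7 lags to score highest because of the weekly seasonality.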

Wed 26 Jun

Displayed time zone: (UTC) Coordinated Universal Time

13:45 - 15:15
Explainability (Research Papers / RE@Next! Papers / Industrial Innovation Papers) at M101
Chair(s): Krzysztof Wnuk (Blekinge Institute of Technology)
13:45
30m
Paper
Explainability Requirements for Time Series Forecasts: A Study in the Energy Domain
Industrial Innovation Papers
Jakob Droste (Leibniz Universität Hannover), Ronja Fuchs (Kraftwerk Kraft-Wärme-Kopplung GmbH), Hannah Deters (Leibniz University Hannover), Jil Klünder (Leibniz Universität Hannover), Kurt Schneider (Leibniz Universität Hannover, Software Engineering Group)
14:15
30m
Paper
Explainability as a Requirement for Hardware: Introducing Explainable Hardware (XHW)
RE@Next! Papers
Timo Speith (University of Bayreuth), Julian Speith (Max Planck Institute for Security and Privacy (MPI-SP)), Steffen Becker, Yixin Zou, Asia Biega, Christof Paar
14:45
30m
Paper
Explanations in Everyday Software Systems: Towards a Taxonomy for Explainability Needs
Research Papers
Jakob Droste (Leibniz Universität Hannover), Hannah Deters (Leibniz University Hannover), Martin Obaidi (Leibniz Universität Hannover), Kurt Schneider (Leibniz Universität Hannover, Software Engineering Group)