Wed 3 Sep 2025 17:00 - 17:20 at Salon de Actos - Explainability and Ethics II Chair(s): Chetan Arora

Explainability is a critical enabler of trust, usability, and regulatory compliance in artificial intelligence (AI) systems. In Hybrid Intelligence (HI) systems, where human expertise and AI collaborate to make decisions, the need for explainability becomes even more pronounced. However, the complexity of explainability requirements varies significantly depending on the role AI plays within the system. From human-dominant systems, where AI provides supportive insights, to AI-dominant systems, where AI assumes autonomous decision-making, the nature and depth of the required explanations shift across this spectrum. Despite its importance, there is a lack of systematic approaches for tailoring explainability to meet stakeholder needs across these varying levels of AI complexity. This paper identifies the key components that shape explainability in HI systems, addressing the distinct roles and interactions of human and AI agents, as well as the diverse needs of stakeholders. In addition, the paper introduces a conceptual framework to assist designers in integrating explainability as a Non-Functional Requirement (NFR) into HI systems. The framework leverages Goal-Oriented Requirements Engineering (GORE) to facilitate the incorporation of suitable explainability NFRs into the design of HI systems, enabling designers to address challenges such as enhancing transparency, fostering trust, and meeting diverse stakeholder expectations. This research offers a first step toward advancing the design of adaptable, accountable, and trustworthy HI systems, contributing valuable insights to the fields of requirements engineering, human-AI collaboration, and regulatory compliance.
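
The framework described in the abstract is conceptual, but as a rough illustration of the idea, the sketch below shows one hypothetical way stakeholder explainability goals could be organized by AI complexity level in a GORE-style structure. The complexity levels, class names, and example goals are assumptions chosen for this sketch and are not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class AIComplexity(Enum):
    """Illustrative AI complexity levels for a Hybrid Intelligence system (assumed, not the paper's taxonomy)."""
    HUMAN_DOMINANT = 1   # AI provides supportive insights
    BALANCED = 2         # human and AI share decision-making
    AI_DOMINANT = 3      # AI makes autonomous decisions

@dataclass
class ExplainabilityGoal:
    """A soft goal (NFR) attached to a stakeholder, in GORE style."""
    stakeholder: str
    description: str
    min_complexity: AIComplexity  # level at which this goal becomes relevant

@dataclass
class GoalModel:
    goals: List[ExplainabilityGoal] = field(default_factory=list)

    def goals_for(self, level: AIComplexity) -> List[ExplainabilityGoal]:
        """Return the explainability goals that apply at a given AI complexity level."""
        return [g for g in self.goals if g.min_complexity.value <= level.value]

if __name__ == "__main__":
    model = GoalModel(goals=[
        ExplainabilityGoal("End user", "Understand why a recommendation was shown",
                           AIComplexity.HUMAN_DOMINANT),
        ExplainabilityGoal("Domain expert", "Trace which features drove a shared decision",
                           AIComplexity.BALANCED),
        ExplainabilityGoal("Auditor", "Reconstruct the full rationale of an autonomous decision",
                           AIComplexity.AI_DOMINANT),
    ])
    # At the AI-dominant level, all three goals apply.
    for goal in model.goals_for(AIComplexity.AI_DOMINANT):
        print(f"{goal.stakeholder}: {goal.description}")
```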

Wed 3 Sep

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

16:00 - 17:40
Explainability and Ethics II (Journal-First / Research Papers / RE@Next! Papers) at Salon de Actos
Chair(s): Chetan Arora Monash University
16:00
30m
Paper
How to Elicit Explainability Requirements? A Comparison of Interviews, Focus Groups, and Surveys
Research Papers
Martin Obaidi Leibniz Universität Hannover, Jakob Droste Leibniz Universität Hannover, Hannah Deters Leibniz University Hannover, Marc Herrmann Leibniz University Hannover, Jil Klünder University of Applied Sciences | FHDW Hannover, Kurt Schneider Leibniz Universität Hannover, Software Engineering Group, Raymond Ochsner Leibniz Universität Hannover
16:30
30m
Paper
Design Thinking In Requirements Engineering: Understanding The Role Of Internal And External Empathy
Research Papers
Ezequiel Kahan Universidad Nacional de Tres de Febrero, Marcela Fabiana Genero Bocco University of Castilla-La Mancha, Beatriz Bernárdez University of Seville, Alejandro Oliveros Universidad Nacional de Tres de Febrero
17:00
20m
Paper
Explainability Across the Spectrum: Modeling Stakeholder Goals Based on AI Complexity Levels
RE@Next! Papers
Antoni Mestre Gascón Universitat Politècnica de València, Manoli Albert Universitat Politecnica de Valencia, Miriam Gil Universidad de Valencia, Vicente Pelechano Universitat Politècnica de València
17:20
20m
Paper
ExplanaSC: A Framework for Determining Information Requirements for Explainable Blockchain Smart Contracts
Journal-First
Hanouf Al Ghanmi, Rami Bahsoon University of Birmingham