ICST 2023
Sun 16 - Thu 20 April 2023 Dublin, Ireland
Sun 16 Apr 2023 12:00 - 12:30 at Macken - ITEQS II

While AI as a cross-sector technology is applied to more and more use cases and contexts, concerns and questions about its impact on the people involved and on society as a whole are increasing. Many companies want to act on these concerns and place a high emphasis on the ethical and fair use of AI in their use cases. To date, however, easily applicable methods for translating ethical values such as fairness into technical specifications are often missing. One such method previously described in the literature is the development of an Assurance Case.

An Assurance Case presents the argument structure of how a claim ("The system is fair") can be substantiated by evidence such as tests. To test the application of the method to fairness requirements in the real world and to derive insights for future use, the method was applied to a real-life use case: a software product that assigns positions in the training of doctors in hospitals.
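The claim-and-evidence structure described above can be sketched as a simple tree: a top-level claim is decomposed into sub-claims, and leaf claims are substantiated by evidence. The following is a minimal illustrative sketch, not the authors' tooling; all class names and example claims are hypothetical.

```python
# Hypothetical sketch of an assurance-case argument tree.
# Class names and example claims are illustrative, not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    description: str  # e.g. a test result or audit report


@dataclass
class Claim:
    statement: str
    sub_claims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it has direct evidence, or if it is decomposed
        into sub-claims that all hold."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)


# Toy example: the top-level fairness claim is only as strong as its
# weakest unsupported sub-claim.
fair = Claim(
    "The system is fair",
    sub_claims=[
        Claim(
            "Assignments are unbiased across protected groups",
            evidence=[Evidence("Statistical parity test on historical assignments")],
        ),
        Claim("Affected doctors can contest an assignment"),  # no evidence yet
    ],
)
print(fair.is_supported())  # False: one sub-claim still lacks evidence
```

Walking such a tree makes visible exactly which claims still lack evidence, which is one way the argument structure supports communication with regulators and stakeholders.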

Fundamental questions were: Can the method of Assurance Cases be used to enhance the fairness of an AI system? What is its application like in an industry context? How can the Assurance Case method be improved to make it applicable to the assessment of fairness in AI systems?

Together with developers, software and domain experts, and AI and Open Innovation experts and researchers, we applied the method to the use case of an industry partner.

Key insights are that the developed Assurance Case is of great help to the industry partner, especially when considering future adaptations, communication, potential regulations, and the evidence required to support their claim of a fair system. Based on the insights and lessons learned from testing the process, the method can be improved so that it can be applied more efficiently to future use cases, enhancing the fairness of AI systems in the long term.

Based on our experience, we consider the Assurance Case framework helpful and useful for assuring the fairness of AI systems.

Sun 16 Apr

Displayed time zone: Dublin

11:00 - 12:30
ITEQS II at Macken
11:00
30m
Talk
Preliminary results in using attention for increasing attack identification efficiency
ITEQS
Tanwir Ahmad Åbo Akademi University, Dragos Truscan Åbo Akademi University, Jüri Vain Tallinn University of Technology, Estonia
11:30
30m
Talk
Lightweight Method for On-the-fly Detection of Multivariable Atomicity Violations (Best Paper Award)
ITEQS
Changhui Bae Gyeongsang National University, Euteum Choi Gyeongsang National University, Yong-Kee Jun Gyeongsang National University, Ok-Kyoon Ha Kyungwoon University
12:00
30m
Talk
Using Assurance Cases to assure the fulfillment of non-functional requirements of AI-based systems - Lessons learned
ITEQS
Marc Hauer TU Kaiserslautern - Algorithm Accountability Lab, Lena Müller-Kress winnovation consulting gmbh, Gertraud Leimüller winnovation consulting gmbh & leiwand.ai gmbh, Katharina Zweig TU Kaiserslautern - Algorithm Accountability Lab