ICST 2024
Mon 27 - Fri 31 May 2024, Canada
Mon 27 May 2024 14:00 - 14:30 at Room 1 - Session 3 Chair(s): Cristina Seceleanu

We propose extending the concept of software correctness, and the associated task of software testing, to include interrogation and diagnosis, through which actionable knowledge about the limitations and performance of the software is gleaned. The new paradigm is especially relevant for AI-based systems such as classifiers and large language models (LLMs), which involve interactions among complex software, underlying training/reference data, and the analysis/input data. In this context, both interrogation (to identify problematic outputs when there may be no measure of correctness) and diagnosis (to determine where the problem lies) are clearly needed. To make these ideas concrete, we present two case studies. Classifier boundaries enable characterization of the robustness of classifier outputs and the fragility of inputs. Cliques in metagenomic assembly provide novel insight into the software as well as potential for performance improvement.
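
The first case study uses classifier decision boundaries to separate robust outputs from fragile inputs. As a rough illustration of the underlying idea only (not the authors' actual method), the sketch below estimates, for each input, how far one can move along a random direction before the predicted label flips; small distances mark fragile inputs near the boundary. The `fragility` function, the toy moons data, and the 0.05 threshold are all hypothetical stand-ins.

```python
# Illustrative sketch: probing a classifier's decision boundary to
# estimate input fragility. Hypothetical setup, not the paper's method.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = LogisticRegression().fit(X, y)

def fragility(x, clf, step=0.01, max_steps=500):
    """Distance along one random direction until the predicted label flips.

    Small distances flag fragile inputs near the decision boundary;
    large distances suggest the prediction is robust to perturbation.
    """
    base = clf.predict(x.reshape(1, -1))[0]
    direction = rng.normal(size=x.shape)
    direction /= np.linalg.norm(direction)
    for k in range(1, max_steps + 1):
        probe = x + k * step * direction
        if clf.predict(probe.reshape(1, -1))[0] != base:
            return k * step  # boundary crossed: distance to label flip
    return np.inf  # no flip found within the search radius

distances = np.array([fragility(x, clf) for x in X])
print(f"median boundary distance: {np.median(distances):.3f}")
print(f"fragile inputs (< 0.05):  {(distances < 0.05).sum()}")
```

A production version would average over many directions (or use a gradient-based attack) rather than a single random probe, but even this crude measure illustrates how interrogation can surface problematic inputs without a ground-truth notion of correctness.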

Mon 27 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Session 3: A-MOST at Room 1
Chair(s): Cristina Seceleanu Mälardalen University
14:00
30m
Full-paper
Active Model Learning for Software Interrogation and Diagnosis
A-MOST
Adam Porter (University of Maryland), Alan Karr
14:30
30m
Short-paper
Active Model Learning of Git Version Control System
A-MOST
Edi Muskardin, Tamim Burgstaller, Martin Tappler (TU Wien, Austria), Bernhard Aichernig (Graz University of Technology)
15:00
30m
Full-paper
Bridging the Gap Between Models in RL: Test Models vs. Neural Networks
A-MOST
Martin Tappler (TU Wien, Austria), Florian Lorber (Silicon Austria Labs)