ICST 2024
Mon 27 - Fri 31 May 2024, Toronto, Canada

This program is tentative and subject to change.

Mon 27 May 2024 14:00 - 14:30 at Room 4 - Session 3

We propose extending the concept of software correctness, and the associated task of software testing, to include interrogation and diagnosis, whereby actionable knowledge about the limitations and performance of the software is gleaned. The new paradigm is especially relevant for AI-based systems such as classifiers and large language models (LLMs), which involve interactions among complex software, underlying training/reference data, and the analysis/input data. In this context, the need for both interrogation (to identify problematic outputs when there may be no measure of correctness) and diagnosis (to determine where the problem lies) is evident. To make this concrete, we present two case studies. Classifier boundaries enable characterization of the robustness of classifier outputs and the fragility of inputs. Cliques in metagenomic assembly provide novel insight into the software as well as the potential for performance improvement.
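
For intuition about the first case study, the following minimal sketch (our illustration, not the authors' code) shows what interrogating a classifier might look like: an input whose predicted label flips under tiny random perturbations lies close to the decision boundary and is therefore fragile. The stand-in classifier, the fragility() helper, and the n_probes and eps parameters are all hypothetical choices, assuming a scikit-learn-style model.

# Hypothetical sketch: probe a classifier's decision boundary to rank
# inputs by fragility. Not the authors' implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in classifier trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fragility(x, n_probes=200, eps=0.5):
    """Fraction of random perturbations of norm <= eps that flip the
    predicted label; higher means closer to a decision boundary."""
    base = clf.predict(x.reshape(1, -1))[0]
    directions = rng.normal(size=(n_probes, x.size))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = eps * rng.random((n_probes, 1))
    flipped = clf.predict(x + radii * directions) != base
    return flipped.mean()

# Interrogation without ground truth: flag the most fragile test inputs.
scores = np.array([fragility(x) for x in X[:50]])
print("most fragile inputs:", np.argsort(scores)[-5:])

Ranking inputs by such a score supports interrogation in the abstract's sense: problematic outputs can be flagged even when no measure of correctness is available, and the flagged inputs become candidates for diagnosis.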

Mon 27 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30  Session 3: A-MOST at Room 4
14:00 (30m) Full-paper
Active Model Learning for Software Interrogation and Diagnosis (A-MOST)
Adam Porter (University of Maryland), Alan Karr
14:30 (30m) Short-paper
Active Model Learning of Git Version Control System (A-MOST)
Edi Muskardin, Tamim Burgstaller, Martin Tappler (TU Graz; Silicon Austria Labs), Bernhard Aichernig (Graz University of Technology)
15:00 (30m) Full-paper
Bridging the Gap Between Models in RL: Test Models vs. Neural Networks (A-MOST)
Martin Tappler (TU Graz; Silicon Austria Labs), Florian Lorber (Silicon Austria Labs)