Active Model Learning for Software Interrogation and Diagnosis
We propose extending the concept of software correctness, and the associated task of software testing, to include interrogation and diagnosis, through which actionable knowledge about the limitations and performance of the software is gleaned. The new paradigm is especially relevant for AI-based systems, such as classifiers and large language models (LLMs), that involve interactions among complex software, the underlying training/reference data, and the analysis/input data. In this context there is a clear need both for interrogation, to identify problematic outputs when no measure of correctness may exist, and for diagnosis, to determine where the problem lies. To make these ideas concrete, we present two case studies. Classifier boundaries enable characterization of the robustness of classifier outputs and the fragility of inputs. Cliques in metagenomic assembly provide novel insight into the software as well as potential for performance improvement.
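The classifier-boundary case study can be illustrated with a simple probe: estimate how far an input can be perturbed before the predicted label flips, so that inputs with small boundary distances are flagged as fragile. The sketch below is illustrative only and is not the paper's method; it assumes a scikit-learn-style classifier exposing predict, and the helper names (boundary_distance, fragility_score) and search parameters are hypothetical.

```python
import numpy as np

def boundary_distance(clf, x, direction, lo=0.0, hi=5.0, tol=1e-3):
    """Bisect along `direction` to estimate how far `x` can be moved
    before the classifier's predicted label changes.
    Returns np.inf if no flip occurs within radius `hi`."""
    base = clf.predict(x.reshape(1, -1))[0]
    if clf.predict((x + hi * direction).reshape(1, -1))[0] == base:
        return np.inf  # no label flip found within the search radius
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if clf.predict((x + mid * direction).reshape(1, -1))[0] == base:
            lo = mid  # still on the original side of the boundary
        else:
            hi = mid  # crossed the boundary; shrink from above
    return hi

def fragility_score(clf, x, n_directions=50, seed=0):
    """Probe random unit directions; a small minimum boundary
    distance indicates a fragile input."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_directions):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        dists.append(boundary_distance(clf, x, d))
    return min(dists)
```

Given any fitted classifier clf and a sample x, fragility_score(clf, x) returns the smallest perturbation magnitude (over the sampled directions) that changes the prediction; ranking inputs by this score is one way such boundary information could support interrogation without a ground-truth correctness measure.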
Mon 27 May (displayed time zone: Eastern Time, US & Canada)

14:00 - 15:30  A-MOST session

14:00  (30 min, full paper)   Active Model Learning for Software Interrogation and Diagnosis
14:30  (30 min, short paper)  Active Model Learning of Git Version Control System
       Edi Muskardin, Tamim Burgstaller, Martin Tappler (TU Wien, Austria), Bernhard Aichernig (Graz University of Technology)
15:00  (30 min, full paper)   Bridging the Gap Between Models in RL: Test Models vs. Neural Networks