ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Tue 29 Oct 2024 11:40 - 11:50 at Magnolia - SE for AI 1

Machine learning (ML) models have become essential components in software systems across several domains, such as autonomous driving, healthcare, and finance. The robustness of these ML models is crucial for maintaining the software system's performance and reliability. A significant challenge arises when these systems encounter out-of-distribution (OOD) data: inputs that differ from the training data distribution. OOD data can degrade the software system's performance. Therefore, an effective OOD detection mechanism is essential for maintaining software system performance and robustness. Such a mechanism should identify and reject OOD inputs and alert software engineers. Current OOD detection methods rely on hyperparameters tuned with both in-distribution and OOD data. However, defining the OOD data that the system will encounter in production is often infeasible. Further, the performance of these methods degrades on OOD data whose characteristics are similar to the in-distribution data. In this paper, we propose a novel OOD detection method based on the Gini coefficient. Our method requires neither prior knowledge of OOD data nor hyperparameter tuning. On common benchmark datasets, we show that our method outperforms the existing maximum softmax probability (MSP) baseline. For a model trained on the MNIST dataset, we improve the OOD detection rate by 4% on the CIFAR10 dataset and by more than 50% on the EMNIST dataset.
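The core idea can be sketched as follows: the Gini coefficient measures how unevenly probability mass is spread across a softmax output, so a peaked (confident, in-distribution-like) prediction scores near 1 while a near-uniform (OOD-like) prediction scores near 0. This is a minimal illustrative sketch, not the paper's implementation; the function names and the example probability vectors are assumptions for illustration only.

```python
import numpy as np

def gini(probs):
    # Gini coefficient of a softmax probability vector, computed with the
    # standard sorted-values formula: G = (2 * sum(i * p_(i)) / sum(p) - (n + 1)) / n.
    # A peaked distribution gives G near 1; a uniform one gives G = 0.
    p = np.sort(np.asarray(probs, dtype=float))
    n = p.size
    index = np.arange(1, n + 1)
    return float((2.0 * np.sum(index * p) / np.sum(p) - (n + 1)) / n)

def msp(probs):
    # Maximum softmax probability baseline used for comparison in the abstract.
    return float(np.max(probs))

# Hypothetical 10-class softmax outputs: one confident, one near-uniform (OOD-like).
confident = [0.9] + [0.1 / 9] * 9
uniform = [0.1] * 10

# The Gini score separates the two cases: gini(confident) = 0.8, gini(uniform) = 0.0.
assert gini(confident) > gini(uniform)
```

In practice a detector would still flag inputs whose score falls below some cut-off; the abstract's claim is that the Gini score itself needs no OOD data to tune, unlike methods whose hyperparameters are fitted against known OOD examples.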

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
10:30
15m
Talk
Evaluating Terminology Translation in Machine Translation Systems via Metamorphic Testing
Research Papers
Yihui Xu Soochow University, Yanhui Li Nanjing University, Jun Wang Nanjing University, Xiaofang Zhang Soochow University
10:45
15m
Talk
Mutual Learning-Based Framework for Enhancing Robustness of Code Models via Adversarial Training
Research Papers
Yangsen Wang Peking University, Yizhou Chen Peking University, Yifan Zhao Peking University, Zhihao Gong Peking University, Junjie Chen Tianjin University, Dan Hao Peking University
DOI Pre-print
11:00
15m
Talk
Supporting Safety Analysis of Image-processing DNNs through Clustering-based Approaches
Journal-first Papers
Mohammed Attaoui University of Luxembourg, Fabrizio Pastore University of Luxembourg, Lionel Briand University of Ottawa, Canada; Lero centre, University of Limerick, Ireland
11:15
15m
Talk
Challenges and Practices of Deep Learning Model Reengineering: A Case Study on Computer Vision
Journal-first Papers
Wenxin Jiang Purdue University, Vishnu Banna Purdue University, Naveen Vivek Purdue University, Abhinav Goel Purdue University, Nicholas Synovic Loyola University Chicago, George K. Thiruvathukal Loyola University Chicago, James C. Davis Purdue University
Link to publication DOI Media Attached
11:30
10m
Talk
A Conceptual Framework for Quality Assurance of LLM-based Socio-critical Systems
NIER Track
Luciano Baresi Politecnico di Milano, Matteo Camilli Politecnico di Milano, Tommaso Dolci Politecnico di Milano, Giovanni Quattrocchi Politecnico di Milano
11:40
10m
Talk
Towards Robust ML-enabled Software Systems: Detecting Out-of-Distribution data using Gini Coefficients
NIER Track
Hala Abdelkader Applied Artificial Intelligence Institute, Deakin University, Jean-Guy Schneider Monash University, Mohamed Abdelrazek Deakin University, Australia, Priya Rani RMIT University, Rajesh Vasa Deakin University, Australia
11:50
10m
Talk
Attacks and Defenses for Large Language Models on Coding Tasks
NIER Track
Chi Zhang, Zifan Wang Center for AI Safety, Ruoshi Zhao Independent Researcher, Ravi Mangal Colorado State University, Matt Fredrikson Carnegie Mellon University, Limin Jia, Corina S. Păsăreanu Carnegie Mellon University; NASA Ames