ICSE 2024
Fri 12 - Sun 21 April 2024, Lisbon, Portugal

In this study, we propose the early adoption of Explainable AI (XAI) with a focus on three properties: (1) Quality of explanation - explanation summaries should be consistent across multiple XAI methods; (2) Architecture style - the XAI methods and the models under explanation should be integrated through compatible architecture styles; (3) Configurable operations - XAI explanations should be operable in the same way as machine learning operations. To be trustworthy, an explanation for an AI model should therefore be reproducible and tractable.
We present XAIport, a framework of XAI microservices encapsulated into Open APIs that delivers early explanations as observations for learning-model quality assurance. XAIport enables configurable XAI operations alongside machine learning development. We quantify the operational cost of incorporating XAI with three cloud computer vision services: Microsoft Azure Cognitive Services, Google Cloud Vertex AI, and Amazon Rekognition. Our findings show that the operational cost of XAI is comparable to that of traditional machine learning, while XAIport significantly improves both model performance and explainability.
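To illustrate the configurable-operations idea in the abstract, the following Python sketch shows how a client might submit an explanation task to an XAIport-style XAI microservice over a REST (Open API) endpoint and retrieve an explanation summary. The endpoint paths, configuration keys, and response fields below are illustrative assumptions, not the published XAIport API.

```python
# Hypothetical sketch of calling an XAIport-style XAI microservice over REST.
# Endpoint paths, config keys, and response fields are assumptions for illustration.
import requests

XAI_SERVICE_URL = "http://localhost:8000"  # assumed local deployment of the XAI service

# A configurable XAI operation: which model backend, which XAI methods,
# which data to explain, and which explanation-quality checks to run.
xai_config = {
    "model_endpoint": "azure_cognitive_vision",   # e.g. Azure / Vertex AI / Rekognition backend
    "xai_methods": ["grad_cam", "shap"],          # multiple methods, so summaries can be compared
    "dataset": "validation_subset",
    "metrics": ["consistency", "stability"],
}

# Submit the explanation task; the service runs it alongside model evaluation.
task = requests.post(f"{XAI_SERVICE_URL}/xai/tasks", json=xai_config, timeout=30)
task.raise_for_status()
task_id = task.json()["task_id"]

# Retrieve the explanation summary once the task completes.
summary = requests.get(f"{XAI_SERVICE_URL}/xai/tasks/{task_id}/summary", timeout=30).json()
print(summary)
```

Keeping the explanation request as a declarative configuration, rather than code embedded in the training pipeline, is what makes the explanations reproducible and operable like other machine learning operations.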

Thu 18 Apr

Displayed time zone: Lisbon

14:00 - 15:30
LLM, NN and other AI technologies 4
Research Track / Industry Challenge Track / New Ideas and Emerging Results at Pequeno Auditório
Chair(s): David Nader Palacio William & Mary
14:00
15m
Talk
Programming Assistant for Exception Handling with CodeBERT
Research Track
Yuchen Cai University of Texas at Dallas, Aashish Yadavally University of Texas at Dallas, Abhishek Mishra University of Texas at Dallas, Genesis Montejo University of Texas at Dallas, Tien N. Nguyen University of Texas at Dallas
14:15
15m
Talk
An Empirical Study on Noisy Label Learning for Program Understanding
Research Track
Wenhan Wang Nanyang Technological University, Yanzhou Li Nanyang Technological University, Anran Li Nanyang Technological University, Jian Zhang Nanyang Technological University, Wei Ma Nanyang Technological University, Singapore, Yang Liu Nanyang Technological University
Pre-print
14:30
15m
Talk
An Empirical Study on Low GPU Utilization of Deep Learning Jobs
Research Track
Yanjie Gao Microsoft Research, Yichen He, Xinze Li Microsoft Research, Bo Zhao Microsoft Research, Haoxiang Lin Microsoft Research, Yoyo Liang Microsoft, Jing Zhong Microsoft, Hongyu Zhang Chongqing University, Jingzhou Wang Microsoft Research, Yonghua Zeng Microsoft, Keli Gui Microsoft, Jie Tong Microsoft, Mao Yang Microsoft Research
DOI Pre-print
14:45
15m
Talk
Using an LLM to Help With Code Understanding
Research Track
Daye Nam Carnegie Mellon University, Andrew Macvean Google, Inc., Vincent J. Hellendoorn Carnegie Mellon University, Bogdan Vasilescu Carnegie Mellon University, Brad A. Myers Carnegie Mellon University
15:00
15m
Talk
MissConf: LLM-Enhanced Reproduction of Configuration-Triggered Bugs
Industry Challenge Track
Ying Fu National University of Defense Technology, Teng Wang National University of Defense Technology, Shanshan Li National University of Defense Technology, Jinyan Ding National University of Defense Technology, Shulin Zhou National University of Defense Technology, Zhouyang Jia National University of Defense Technology, Wang Li National University of Defense Technology, Yu Jiang Tsinghua University, Liao Xiangke National University of Defense Technology
Media Attached · File Attached
15:15
7m
Talk
XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development
New Ideas and Emerging Results
Zerui Wang Concordia University, Yan Liu Concordia University, Abishek Arumugam Thiruselvi Concordia University, Wahab Hamou-Lhadj Concordia University, Montreal, Canada
DOI Pre-print
15:22
7m
Talk
Which Syntactic Capabilities Are Statistically Learned by Masked Language Models for Code?
New Ideas and Emerging Results
Alejandro Velasco William & Mary, David Nader Palacio William & Mary, Daniel Rodriguez-Cardenas, Denys Poshyvanyk William & Mary
Pre-print