ICSE 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
Thu 1 May 2025 12:15 - 12:30 at 215 - SE for AI 2 Chair(s): Grace Lewis

LLM-powered coding and development assistants have become prevalent in programmers' workflows. However, concerns about the trustworthiness of LLMs for code persist despite their widespread use. Much of the existing research has focused on either training or evaluation, raising questions about whether stakeholders in training and evaluation align in their understanding of model trustworthiness and whether they can move toward a unified direction. In this paper, we propose a vision for a unified trustworthiness auditing framework, DataTrust, which adopts a data-centric approach that synergistically emphasizes both training and evaluation data and their correlations. DataTrust aims to connect model trustworthiness indicators in evaluation with data quality indicators in training. It autonomously inspects training data and evaluates model trustworthiness using synthesized data, attributing potential causes from specific evaluation data to corresponding training data and refining indicator connections. Additionally, a trustworthiness arena powered by DataTrust will engage crowdsourced input and deliver quantitative outcomes. We outline the benefits that various stakeholders can gain from DataTrust and discuss the challenges and opportunities it presents.

Thu 1 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
SE for AI 2: New Ideas and Emerging Results (NIER) / Research Track, at 215
Chair(s): Grace Lewis Carnegie Mellon Software Engineering Institute
11:00
15m
Talk
Answering User Questions about Machine Learning Models through Standardized Model Cards (SE for AI)
Research Track
Tajkia Rahman Toma University of Alberta, Balreet Grewal University of Alberta, Cor-Paul Bezemer University of Alberta
Pre-print
11:15
15m
Talk
Fairness Testing through Extreme Value Theory (SE for AI)
Research Track
Verya Monjezi University of Texas at El Paso, Ashutosh Trivedi University of Colorado Boulder, Vladik Kreinovich University of Texas at El Paso, Saeid Tizpaz-Niari University of Illinois Chicago
11:30
15m
Talk
Fixing Large Language Models' Specification Misunderstanding for Better Code Generation (SE for AI)
Research Track
Zhao Tian Tianjin University, Junjie Chen Tianjin University, Xiangyu Zhang Purdue University
Pre-print
11:45
15m
Talk
SOEN-101: Code Generation by Emulating Software Process Models Using Large Language Model Agents (SE for AI)
Research Track
Feng Lin Concordia University, Dong Jae Kim DePaul University, Tse-Hsun (Peter) Chen Concordia University
12:00
15m
Talk
The Product Beyond the Model -- An Empirical Study of Repositories of Open-Source ML Products (SE for AI)
Research Track
Nadia Nahar Carnegie Mellon University, Haoran Zhang Carnegie Mellon University, Grace Lewis Carnegie Mellon Software Engineering Institute, Shurui Zhou University of Toronto, Christian Kästner Carnegie Mellon University
12:15
15m
Talk
Towards Trustworthy LLMs for Code: A Data-Centric Synergistic Auditing Framework (SE for AI)
New Ideas and Emerging Results (NIER)
Chong Wang Nanyang Technological University, Zhenpeng Chen Nanyang Technological University, Li Tianlin NTU, Yilun Zhang AIXpert, Yang Liu Nanyang Technological University