FSE 2025
Mon 23 - Fri 27 June 2025, Trondheim, Norway
co-located with ISSTA 2025
Tue 24 Jun 2025 12:10 - 12:30 at Cosmos 3C - SE for LLM Chair(s): Hongyu Zhang

Large Language Models (LLMs) are prone to hallucinations, such as factually incorrect information, in their responses. These hallucinations present challenges for LLM-based applications that demand high factual accuracy. Existing hallucination detection methods primarily depend on external resources, which can suffer from issues such as low availability, incomplete coverage, privacy concerns, high latency, low reliability, and poor scalability. Other methods rely on output probabilities, which are often inaccessible for closed-source LLMs such as the GPT models. This paper presents MetaQA, a self-contained hallucination detection approach that leverages metamorphic testing and prompt mutation. Unlike existing methods, MetaQA operates without any external resources and is compatible with both open-source and closed-source LLMs. MetaQA is based on the hypothesis that if an LLM’s response is a hallucination, the designed metamorphic relations will be violated. We compare MetaQA with the state-of-the-art zero-resource hallucination detection method, SelfCheckGPT, across multiple datasets and on two open-source and two closed-source LLMs. Our results show that MetaQA outperforms SelfCheckGPT in precision, recall, and F1-score. For the four LLMs we study, MetaQA’s margin of improvement over SelfCheckGPT ranges from 0.041 to 0.113 in precision, 0.143 to 0.430 in recall, and 0.154 to 0.368 in F1-score. For instance, with Mistral-7B, MetaQA achieves an average F1-score of 0.435, compared to SelfCheckGPT’s 0.205, an improvement of 112.2%. MetaQA also demonstrates superiority across all categories of questions.
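For readers unfamiliar with metamorphic testing in this setting, the sketch below illustrates only the general idea described in the abstract, not the authors’ implementation: semantically equivalent mutations of a question should elicit consistent answers, and frequent violations of that relation are treated as a sign of hallucination. The `query_llm`, `mutate_prompt`, and `is_consistent` helpers are hypothetical placeholders.

```python
# Illustrative sketch of metamorphic-relation-based hallucination detection.
# All helpers are hypothetical; the paper's actual mutation operators and
# consistency checks may differ.

def query_llm(prompt: str) -> str:
    """Placeholder wrapper around any open- or closed-source LLM API."""
    raise NotImplementedError("plug in an LLM client here")

def mutate_prompt(question: str) -> list[str]:
    # Semantically equivalent rephrasings of the same question.
    return [
        question,
        f"Please answer the following question: {question}",
        f"Answer briefly: {question}",
    ]

def is_consistent(answer_a: str, answer_b: str) -> bool:
    # Crude token-overlap (Jaccard) check; a real system would likely use
    # semantic similarity or an entailment model instead.
    a, b = set(answer_a.lower().split()), set(answer_b.lower().split())
    return len(a & b) / max(len(a | b), 1) > 0.5

def flag_hallucination(question: str, threshold: float = 0.5) -> bool:
    # Metamorphic relation: equivalent prompts should yield consistent answers.
    # A high violation rate is taken as evidence of hallucination.
    answers = [query_llm(p) for p in mutate_prompt(question)]
    base = answers[0]
    violations = sum(not is_consistent(base, a) for a in answers[1:])
    return violations / max(len(answers) - 1, 1) > threshold
```

Because the check compares the model’s own responses to mutated prompts, it needs no external knowledge base and works with closed-source models that expose only text output.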

Tue 24 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:30
10:30
10m
Talk
Enhancing Code LLM Training with Programmer Attention
Ideas, Visions and Reflections
Yifan Zhang Vanderbilt University, Chen Huang Sichuan University, Zachary Karas Vanderbilt University, Thuy Dung Nguyen Vanderbilt University, Kevin Leach Vanderbilt University, Yu Huang Vanderbilt University
10:40
20m
Talk
Risk Assessment Framework for Code LLMs via Leveraging Internal States
Industry Papers
Yuheng Huang The University of Tokyo, Lei Ma The University of Tokyo & University of Alberta, Keizaburo Nishikino Fujitsu Limited, Takumi Akazaki Fujitsu Limited
11:00
20m
Talk
An Empirical Study of Issues in Large Language Model Training Systems
Industry Papers
Yanjie Gao Microsoft Research, Ruiming Lu Shanghai Jiao Tong University, Haoxiang Lin Microsoft Research, Yueguo Chen Renmin University of China
11:20
20m
Talk
Look Before You Leap: An Exploratory Study of Uncertainty Analysis for Large Language Models
Journal First
Yuheng Huang The University of Tokyo, Jiayang Song University of Alberta, Zhijie Wang University of Alberta, Shengming Zhao University of Alberta, Huaming Chen The University of Sydney, Felix Juefei-Xu New York University, Lei Ma The University of Tokyo & University of Alberta
11:40
10m
Talk
EvidenceBot: A Privacy-Preserving, Customizable RAG-Based Tool for Enhancing Large Language Model Interactions
Demonstrations
Nafiz Imtiaz Khan University of California, Davis, Vladimir Filkov University of California, Davis
11:50
20m
Talk
OpsEval: A Comprehensive Benchmark Suite for Evaluating Large Language Models’ Capability in IT Operations Domain
Industry Papers
Yuhe Liu Tsinghua University, Changhua Pei Computer Network Information Center at Chinese Academy of Sciences, Longlong Xu Tsinghua University, Bohan Chen Tsinghua University, Mingze Sun Tsinghua University, Zhirui Zhang Beijing University of Posts and Telecommunications, Yongqian Sun Nankai University, Shenglin Zhang Nankai University, Kun Wang Zhejiang University, Haiming Zhang Chinese Academy of Sciences, Jianhui Li Computer Network Information Center at Chinese Academy of Sciences, Gaogang Xie Computer Network Information Center at Chinese Academy of Sciences, Xidao Wen BizSeer, Xiaohui Nie Computer Network Information Center at Chinese Academy of Sciences, Minghua Ma Microsoft, Dan Pei Tsinghua University
12:10
20m
Talk
Hallucination Detection in Large Language Models with Metamorphic Relations
Research Papers
Borui Yang Beijing University of Posts and Telecommunications, Md Afif Al Mamun University of Calgary, Jie M. Zhang King's College London, Gias Uddin York University, Canada

Information for Participants
Tue 24 Jun 2025 10:30 - 12:30 at Cosmos 3C - SE for LLM Chair(s): Hongyu Zhang
Info for room Cosmos 3C:

Cosmos 3C is the third room in the Cosmos 3 wing.

When facing the main Cosmos Hall, the entrance to the Cosmos 3 wing is on the left, close to the stairs. The area is accessed through a large door marked “3”, which will remain open during the event.
