ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Thu 31 Oct 2024 16:15 - 16:30 at Magnolia - SE for AI 3 Chair(s): Nafiz Imtiaz Khan

Large language models (LLMs) have achieved unprecedented success in the field of natural language processing. However, the black-box nature of their internal mechanisms has raised many concerns about their trustworthiness and interpretability. Recent research has discovered a class of abnormal tokens in the model’s vocabulary space and named them “glitch tokens”. These tokens, once included in the input, may induce the model to produce incorrect, irrelevant, or even harmful results, drastically undermining the reliability and practicality of LLMs.

In this work, we aim to enhance the understanding of glitch tokens and propose techniques for their detection and mitigation. We first reveal the characteristic features induced by glitch tokens on LLMs, which are evidenced by significant deviations in the distributions of attention patterns and dynamic information from intermediate model layers. Based on these insights, we develop GlitchProber, a tool for efficient glitch token detection and mitigation. GlitchProber utilizes small-scale sampling, principal component analysis for accelerated feature extraction, and a simple classifier for efficient vocabulary screening. Taking one step further, GlitchProber rectifies abnormal intermediate-layer values to mitigate the destructive effects of glitch tokens. Evaluated on five mainstream open-source LLMs, GlitchProber demonstrates higher efficiency, precision, and recall compared to existing approaches, with an average F1 score of 0.86 and an average repair rate of 50.06%. GlitchProber unveils a novel path to address the challenges posed by glitch tokens and inspires future research toward more robust and interpretable LLMs.
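The detection pipeline described above (small-scale sampling, PCA-based feature compression, and a simple classifier for vocabulary screening) can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the feature matrix here is synthetic random data standing in for intermediate-layer attention/activation features, and the nearest-centroid classifier is a stand-in for whatever simple classifier GlitchProber actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for intermediate-layer features. In GlitchProber these
# would be attention patterns / activation statistics extracted from an LLM.
vocab_size, feat_dim = 1000, 64
features = rng.normal(size=(vocab_size, feat_dim))
glitch_ids = rng.choice(vocab_size, size=50, replace=False)
features[glitch_ids] += 3.0  # glitch tokens deviate markedly in feature space

# Step 1: small-scale sampling -- obtain labels for only a subset of tokens.
sample_ids = rng.choice(vocab_size, size=200, replace=False)
sample_is_glitch = np.isin(sample_ids, glitch_ids)

# Step 2: PCA for accelerated feature extraction (project onto top-k components).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8
projected = centered @ vt[:k].T  # shape: (vocab_size, k)

# Step 3: train a simple classifier (nearest centroid) on the labelled sample,
# then screen the entire vocabulary with it.
glitch_centroid = projected[sample_ids[sample_is_glitch]].mean(axis=0)
normal_centroid = projected[sample_ids[~sample_is_glitch]].mean(axis=0)
d_glitch = np.linalg.norm(projected - glitch_centroid, axis=1)
d_normal = np.linalg.norm(projected - normal_centroid, axis=1)
predicted_glitch = d_glitch < d_normal

# Evaluate the screening against the (synthetic) ground truth.
truth = np.isin(np.arange(vocab_size), glitch_ids)
precision = (predicted_glitch & truth).sum() / max(predicted_glitch.sum(), 1)
recall = (predicted_glitch & truth).sum() / truth.sum()
```

The design point the sketch tries to convey: labelling the full vocabulary by probing the model token-by-token is expensive, so a small labelled sample plus cheap low-dimensional features lets a lightweight classifier screen the remaining tokens quickly.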

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:30
SE for AI 3 (Research Papers) at Magnolia
Chair(s): Nafiz Imtiaz Khan Department of Computer Science, University of California, Davis
15:30
15m
Talk
DevMuT: Testing Deep Learning Framework via Developer Expertise-Based Mutation
Research Papers
Yanzhou Mu, Juan Zhai (University of Massachusetts at Amherst), Chunrong Fang (Nanjing University), Xiang Chen (Nantong University), Zhixiang Cao (Xi'an Jiaotong University), Peiran Yang (State Key Laboratory for Novel Software Technology, Nanjing University, China), Yinglong Zou (Nanjing University), Tao Zheng (Nanjing University), Zhenyu Chen (Nanjing University)
15:45
15m
Talk
Mutation-Based Deep Learning Framework Testing Method in JavaScript Environment
Research Papers
Yinglong Zou (Nanjing University), Juan Zhai (University of Massachusetts at Amherst), Chunrong Fang (Nanjing University), Jiawei Liu (University of Illinois at Urbana-Champaign), Tao Zheng (Nanjing University), Zhenyu Chen (Nanjing University)
16:00
15m
Talk
DynaMO: Protecting Mobile DL Models through Coupling Obfuscated DL Operators
Research Papers
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Xiao Chen (University of Newcastle), Chunyang Chen (TU Munich), John Grundy (Monash University), Li Li (Beihang University)
16:15
15m
Talk
GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models
Research Papers
Zhibo Zhang (Huazhong University of Science and Technology), Wuxia Bai (Huazhong University of Science and Technology), Yuxi Li (Huazhong University of Science and Technology), Mark Huasong Meng (National University of Singapore), Kailong Wang (Huazhong University of Science and Technology), Ling Shi (Nanyang Technological University), Li Li (Beihang University), Jun Wang (Post Luxembourg), Haoyu Wang (Huazhong University of Science and Technology)