Quirx: A Mutation-Based Framework for Evaluating Prompt Robustness in LLM-based Software
This program is tentative and subject to change.
Large Language Models (LLMs) increasingly power critical business processes, yet prompt robustness remains underexplored. Small variations, such as synonym changes or instruction reordering, can cause significant output shifts, undermining reliability in domains like customer service and finance. Existing evaluations rely on ad-hoc manual testing, limiting scalability in production environments.
We present Quirx, a mutation-based fuzzing framework for systematically evaluating prompt robustness across LLM providers. Quirx applies tri-dimensional mutations (lexical, semantic, structural), executes them against target models, and measures response consistency via multi-level similarity analysis. It produces robustness scores, reveals failure patterns, and supports informed model selection.
We evaluate Quirx on four models (GPT-3.5-turbo, GPT-4o-mini, Claude-3.5-Sonnet, Claude-Sonnet-4) across three tasks. Results show sentiment classification is uniformly robust (score 1.00), summarization is highly provider-sensitive (0.23–0.58) with Claude models roughly 2.5× more robust than OpenAI's, and SQL generation is consistently strong (0.80–1.00). Structural mutations cause 50–67% of summarization failures but have minimal effect on other tasks.
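The mutate–execute–compare loop described above can be sketched in a few lines. The following is a minimal illustration, not Quirx's actual API: the function names, the toy synonym table, and the character-level similarity metric are all assumptions standing in for the framework's richer mutation operators and multi-level similarity analysis; `model` is a placeholder for a call to any LLM provider.

```python
import difflib

def lexical_mutation(prompt: str) -> str:
    """Swap selected words for synonyms (toy table; illustrative only)."""
    synonyms = {"summarize": "condense", "classify": "categorize", "brief": "short"}
    return " ".join(synonyms.get(w.lower(), w) for w in prompt.split())

def structural_mutation(prompt: str) -> str:
    """Reorder sentence-level instructions within the prompt."""
    parts = [p.strip() for p in prompt.split(".") if p.strip()]
    return ". ".join(reversed(parts)) + "."

def response_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; a stand-in for
    Quirx's multi-level (lexical/semantic/structural) analysis."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def robustness_score(model, prompt: str, mutators) -> float:
    """Average similarity between the baseline response and
    responses to each mutated prompt."""
    baseline = model(prompt)
    scores = [response_similarity(baseline, model(m(prompt))) for m in mutators]
    return sum(scores) / len(scores)
```

A score near 1.0 means the model's output is stable under the applied mutations; low scores flag prompts (or tasks, such as summarization in the evaluation above) where small input variations shift the output substantially.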
Tue 18 Nov, 15:00 - 18:00 (displayed time zone: Seoul)
All demonstrations below run 15:00, 3h, in the Tool Demonstration Track.

- APIDA-Chat: Structured Synthesis of API Search Dialogues to Bootstrap Conversational Agents.
- PROXiFY: A Bytecode Analysis Tool for Detecting and Classifying Proxy Contracts in Ethereum Smart Contracts. Ilham Qasse (Reykjavik University), Mohammad Hamdaqa (Polytechnique Montreal), Björn Þór Jónsson (Reykjavik University).
- DeepTx: Real-Time Transaction Risk Analysis via Multi-Modal Features and LLM Reasoning. Yixuan Liu, Xinlei Li, Yi Li (Nanyang Technological University). Pre-print.
- WIBE: Watermarks for generated Images - Benchmarking & Evaluation. Aleksey Yakushev (ISP RAS), Aleksandr Akimenkov (ISP RAS), Khaled Abud (MSU AI Institute), Dmitry Obydenkov (ISP RAS), Irina Serzhenko (MIPT), Kirill Aistov (Huawei Research Center), Egor Kovalev (MSU), Stanislav Fomin (ISP RAS), Anastasia Antsiferova (ISP RAS Research Center, MSU AI Institute), Kirill Lukianov (ISP RAS Research Center, MIPT), Yury Markin (ISP RAS).
- EyeNav: Accessible Webpage Interaction and Testing using Eye-tracking and NLP. Juan Diego Yepes-Parra, Camilo Escobar-Velásquez (Universidad de los Andes, Colombia). Link to publication; media attached.
- Quirx: A Mutation-Based Framework for Evaluating Prompt Robustness in LLM-based Software. Souhaila Serbout (University of Zurich, Zurich, Switzerland).
- BenGQL: An Extensible Benchmarking Framework for Automated GraphQL Testing. Media attached.
- evalSmarT: An LLM-Based Evaluation Framework for Smart Contract Comment Generation. Fatou Ndiaye MBODJI (SnT, University of Luxembourg), Mame Marieme Ciss SOUGOUFARA (UCAD, Senegal), Wendkuuni Arzouma Marc Christian OUEDRAOGO (SnT, University of Luxembourg), Alioune Diallo (University of Luxembourg), Kui Liu (Huawei), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg). Pre-print.
- LLMorph: Automated Metamorphic Testing of Large Language Models. Steven Cho (The University of Auckland, New Zealand), Stefano Ruberto (JRC European Commission), Valerio Terragni (University of Auckland).
- TRUSTVIS: A Multi-Dimensional Trustworthiness Evaluation Framework for Large Language Models. Ruoyu Sun (University of Alberta, Canada), Da Song (University of Alberta), Jiayang Song (Macau University of Science and Technology), Yuheng Huang (The University of Tokyo), Lei Ma (The University of Tokyo & University of Alberta).
- GUI-ReRank: Enhancing GUI Retrieval with Multi-Modal LLM-based Reranking. Kristian Kolthoff (Institute for Software and Systems Engineering, Clausthal University of Technology), Felix Kretzer (Human-Centered Systems Lab (h-lab), Karlsruhe Institute of Technology (KIT)), Christian Bartelt (Institute for Software and Systems Engineering, TU Clausthal), Alexander Maedche (Human-Centered Systems Lab (h-lab), Karlsruhe Institute of Technology (KIT)), Simone Paolo Ponzetto (Data and Web Science Group, University of Mannheim). Pre-print; media attached.
- StackPlagger: A System for Identifying AI-Code Plagiarism on Stack Overflow. Aman Swaraj (Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India), Harsh Goyal (Indian Institute of Technology, Roorkee), Sumit Chadgal (Indian Institute of Technology, Roorkee), Sandeep Kumar (Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India).
- AgentDroid: A Multi-Agent Tool for Detecting Fraudulent Android Applications. Ruwei Pan (Chongqing University), Hongyu Zhang (Chongqing University), Zhonghao Jiang, Ran Hou (Chongqing University).
