
This program is tentative and subject to change.

Wed 30 Apr 2025 16:30 - 16:45 at 214 - AI for Testing and QA 2

Flaky tests, characterized by inconsistent results across repeated executions, present significant challenges in software testing, especially during regression testing. Recently, there has been emerging research interest in non-idempotent-outcome (NIO) flaky tests—tests that pass on the initial run but fail on subsequent executions within the same environment. Despite progress in utilizing Large Language Models (LLMs) to address flaky tests, existing methods have not tackled NIO flaky tests. The limited context window of LLMs restricts their ability to incorporate relevant source code beyond the test method itself, often overlooking crucial information needed to address state pollution, which is the root cause of NIO flakiness.
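To make the notion of accumulative state pollution concrete, here is a hypothetical minimal example (not taken from the paper's subject projects) of an NIO test: it passes on its first run but fails on re-execution in the same environment because the first run pollutes shared static state that is never cleaned up.

```java
import java.util.ArrayList;
import java.util.List;

public class NioExample {
    // Shared state that persists across test runs within the same JVM.
    static final List<String> cache = new ArrayList<>();

    // The "test": passes only when the cache starts empty.
    static boolean testAddsExactlyOneEntry() {
        cache.add("entry");          // pollutes state; never cleaned up
        return cache.size() == 1;    // true on run 1, false on run 2
    }

    public static void main(String[] args) {
        System.out.println("run 1: " + testAddsExactlyOneEntry()); // run 1: true
        System.out.println("run 2: " + testAddsExactlyOneEntry()); // run 2: false
    }
}
```

A fix in the spirit described by the abstract would reset the polluted state (e.g., clear the cache in a teardown step) so each execution starts from a clean slate.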

This paper introduces NIODebugger, the first framework to utilize an LLM-based agent for fixing flaky tests. NIODebugger features a three-phase design: detection, exploration, and fixing. In the detection phase, dynamic analysis provides critical information (such as stack traces and custom test execution logs) from multiple test runs, which helps in understanding accumulative state pollution. During the exploration phase, the LLM-based agent identifies and provides instructions for extracting relevant source code associated with test flakiness. In the fixing phase, NIODebugger repairs the tests using the information gathered from the previous phases. NIODebugger can be integrated with multiple LLMs, achieving patching success rates ranging from 11.63% to 58.72%. Its best-performing variant, NIODebugger-GPT-4, successfully generated correct patches for 101 out of 172 previously unknown NIO tests across 20 large-scale open-source projects. We submitted pull requests for all generated patches; 58 have been merged, only 1 was rejected, and the remaining 42 are pending. The implementation of NIODebugger is provided as a Maven plugin accessible at https://github.com/NIOTester/NIODebugger.
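The detection phase relies on observing a test across multiple runs in the same environment. A conceptual sketch of that idea, under the NIO definition given above (this is an illustration, not NIODebugger's actual implementation):

```java
import java.util.function.BooleanSupplier;

public class NioDetectorSketch {
    // Classify a test as NIO if it passes on the first run but fails on
    // some later rerun within the same environment.
    static boolean isNio(BooleanSupplier test, int reruns) {
        if (!test.getAsBoolean()) return false; // must pass on run 1
        for (int i = 1; i < reruns; i++) {
            if (!test.getAsBoolean()) return true; // later failure => NIO
        }
        return false; // consistently passes: not NIO
    }
}
```

In the real tool, each run would also collect stack traces and execution logs, which the abstract identifies as the inputs that let the LLM-based agent reason about accumulated state pollution in the later phases.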

Wed 30 Apr

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
AI for Testing and QA 2
Research Track / SE In Practice (SEIP) at 214
16:00
15m
Talk
Faster Configuration Performance Bug Testing with Neural Dual-level Prioritization
Research Track
Youpeng Ma University of Electronic Science and Technology of China, Tao Chen University of Birmingham, Ke Li University of Exeter
16:15
15m
Talk
Metamorphic-Based Many-Objective Distillation of LLMs for Code-related Tasks
Research Track
Annibale Panichella Delft University of Technology
16:30
15m
Talk
NIODebugger: A Novel Approach to Repair Non-Idempotent-Outcome Tests with LLM-Based Agent
Research Track
Kaiyao Ke University of Illinois at Urbana-Champaign
16:45
15m
Talk
Test Intention Guided LLM-based Unit Test Generation
Research Track
Zifan Nan Huawei, Zhaoqiang Guo Software Engineering Application Technology Lab, Huawei, China, Kui Liu Huawei, Xin Xia Huawei
17:00
15m
Talk
What You See Is What You Get: Attention-based Self-guided Automatic Unit Test Generation
Research Track
Xin Yin Zhejiang University, Chao Ni Zhejiang University, Xiaodan Xu College of Computer Science and Technology, Zhejiang University, Xiaohu Yang Zhejiang University
17:15
15m
Talk
Improving Code Performance Using LLMs in Zero-Shot: RAPGen
SE In Practice (SEIP)
Spandan Garg Microsoft Corporation, Roshanak Zilouchian Moghaddam Microsoft, Neel Sundaresan Microsoft