Adaptive Test Healing using LLM/GPT and Reinforcement Learning
Flaky tests disrupt software development pipelines by producing inconsistent results, undermining reliability and efficiency. This paper introduces a hybrid framework for adaptive test healing, combining Large Language Models (LLMs) like GPT with Reinforcement Learning (RL) to address test flakiness dynamically. LLMs analyze test logs to classify failures and extract contextual insights, while the RL agent learns optimal strategies for test retries, parameter tuning, and environment resets. Experimental results demonstrate the framework's effectiveness in reducing flakiness and improving CI/CD pipeline stability, outperforming traditional approaches. This work paves the way for scalable, intelligent test automation in dynamic development environments.
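To make the abstract's pipeline concrete, the sketch below pairs a stand-in for the LLM log classifier with a tabular Q-learning agent that chooses among retries, parameter tuning, and environment resets. This is a minimal illustration under assumed names and a toy reward model (classify_failure, HealingAgent, the failure classes, and the rewards are all hypothetical), not the authors' implementation; in the paper's framework the classifier would be a GPT-style model prompted with the failing log.

```python
import random
from collections import defaultdict

# Hypothetical failure categories an LLM might assign to a failing-test log.
FAILURE_CLASSES = ["timeout", "race_condition", "env_dependency", "assertion"]
# Healing actions available to the RL agent, as listed in the abstract.
ACTIONS = ["retry", "tune_parameters", "reset_environment"]


def classify_failure(log_text: str) -> str:
    """Stand-in for the LLM call that labels a failing test log.

    Faked here with keyword matching so the sketch runs offline; a real
    system would prompt a GPT-style model with the log text instead.
    """
    lowered = log_text.lower()
    if "timed out" in lowered:
        return "timeout"
    if "connection refused" in lowered or "env" in lowered:
        return "env_dependency"
    if "deadlock" in lowered or "race" in lowered:
        return "race_condition"
    return "assertion"


class HealingAgent:
    """Tabular Q-learning over (failure class) -> (healing action)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state: str) -> str:
        if random.random() < self.epsilon:  # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state: str, action: str, reward: float, next_state: str) -> None:
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


def healing_episode(agent: HealingAgent, log_text: str) -> None:
    """Classify the log, pick a healing action, and learn from a toy reward."""
    state = classify_failure(log_text)
    action = agent.choose(state)
    # Assumed reward model: a retry fixes timeouts, an environment reset fixes
    # environment issues, parameter tuning fixes races; anything else is penalized.
    good_pairs = {
        ("timeout", "retry"),
        ("env_dependency", "reset_environment"),
        ("race_condition", "tune_parameters"),
    }
    reward = 1.0 if (state, action) in good_pairs else -0.5
    agent.update(state, action, reward, state)


if __name__ == "__main__":
    agent = HealingAgent()
    logs = ["Test timed out after 30s", "Connection refused by env", "Deadlock detected"]
    for _ in range(200):
        healing_episode(agent, random.choice(logs))
    agent.epsilon = 0.0  # report the greedy (learned) policy
    for state in FAILURE_CLASSES:
        print(state, "->", agent.choose(state))
```

After a few hundred simulated failures the greedy policy maps each assumed failure class to the action that the toy reward model favors, which mirrors the abstract's claim that the RL agent learns healing strategies from feedback on past interventions.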
Tue 1 Apr (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
09:10 - 10:30
09:10 30m Talk | Adaptive Test Healing using LLM/GPT and Reinforcement Learning (AIST)
09:40 30m Talk | A System for Automated Unit Test Generation Using Large Language Models and Assessment of Generated Test Suites (AIST) | Andrea Lops (Polytechnic University of Bari, Italy), Fedelucio Narducci (Polytechnic University of Bari), Azzurra Ragone (University of Bari), Michelantonio Trizio (Wideverse), Claudio Bartolini (Wideverse s.r.l.)
10:10 20m Talk | From Implemented to Expected Behaviors: Leveraging Regression Oracles for Non-Regression Fault Detection Using LLMs (AIST) | Stefano Ruberto (JRC European Commission), Judith Perera (University of Auckland), Gunel Jahangirova (King's College London), Valerio Terragni (University of Auckland)