Towards Understanding the Effectiveness of Large Language Models on Directed Test Input Generation
Automatic testing has garnered significant attention and success over the past few decades. Techniques such as unit testing and coverage-guided fuzzing have revealed numerous critical software bugs and vulnerabilities. However, a long-standing, formidable challenge for existing techniques is how to achieve higher testing coverage. Constraint-based techniques, such as symbolic execution and concolic testing, have been well explored and integrated into existing approaches. With the popularity of Large Language Models (LLMs), recent research has designed tailored prompts to generate inputs that can reach previously uncovered target branches. However, the effectiveness of using LLMs to generate such directed inputs, and how they compare with proven constraint-based solutions, has not been systematically explored.
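As a concrete illustration of this prompting style, the snippet below is a minimal sketch of asking an LLM for argument values that reach a marked branch. It is not any specific tool's prompt: it assumes the OpenAI Python client, and the classify function and its target branch are hypothetical placeholders.

```python
# Minimal sketch of prompting an LLM for a directed test input.
# Assumes the OpenAI Python client; the classify function and its
# marked branch are hypothetical placeholders, not from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

target_function = '''
def classify(x: int, y: int) -> str:
    if x > 10 and y % 7 == 3:   # <-- uncovered target branch
        return "rare"
    return "common"
'''

prompt = (
    "Given the Python function below, provide concrete argument values "
    "for (x, y) so that execution reaches the branch marked "
    "'uncovered target branch'. Answer with the values only.\n"
    + target_function
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. "x = 11, y = 10"
```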
To bridge this gap, we conduct the first systematic, comparative study of mainstream LLMs and constraint-based tools for directed input generation. We find that LLMs such as ChatGPT are comparable to, or even better than, the constraint-based tools, succeeding on 43.40%-58.57% of the samples in our dataset. Meanwhile, LLMs also show limitations in certain scenarios, such as sequential calculation, where constraint-based tools remain stronger. Based on these findings, we propose a simple yet effective method that combines the two types of tools, and we implement a prototype based on ChatGPT and constraint-based tools. Our evaluation shows that our approach outperforms the baselines by 1.4x to 2.3x in relative terms. We believe our study provides novel insights into directed input generation using LLMs, and our findings are essential for future testing research.
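One way such a combination could be realized, sketched here under our own assumptions rather than as the paper's implementation, is to accept the LLM's candidate input when it actually covers the target branch and otherwise fall back to a constraint solver such as Z3. The helpers ask_llm_for_input and reaches_target are hypothetical stand-ins for the prompting step above and a branch-coverage oracle.

```python
# Minimal sketch (not the paper's implementation) of combining an
# LLM-proposed input with a constraint-solver fallback, using the
# z3-solver package. ask_llm_for_input and reaches_target are
# hypothetical helpers introduced only for this illustration.
from z3 import And, Int, Solver, sat

def reaches_target(x, y):
    # Oracle: does this input drive execution into the target branch?
    return x > 10 and y % 7 == 3

def ask_llm_for_input():
    # Placeholder for the LLM call; here a guess that misses the branch.
    return 12, 5

def generate_directed_input():
    x, y = ask_llm_for_input()
    if reaches_target(x, y):           # LLM input already covers the branch
        return x, y
    # Fallback: encode the branch condition and let the solver decide.
    xs, ys = Int("x"), Int("y")
    solver = Solver()
    solver.add(And(xs > 10, ys % 7 == 3))
    if solver.check() == sat:
        model = solver.model()
        return model[xs].as_long(), model[ys].as_long()
    return None                        # the target branch is infeasible

print(generate_directed_input())       # e.g. (11, 3)
```

The design choice in this sketch, trying the cheap LLM query first and reserving the solver for inputs the LLM fails on, mirrors the complementary strengths reported in the abstract.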
Thu 31 Oct | Displayed time zone: Pacific Time (US & Canada)
10:30 - 12:00 | Test generation | Research Papers / Journal-first Papers | at Gardenia
Chair(s): Lingming Zhang (University of Illinois at Urbana-Champaign)

10:30 (15m, Talk) | Towards Understanding the Effectiveness of Large Language Models on Directed Test Input Generation | Research Papers
Zongze Jiang (Huazhong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology), Jialun Cao (Hong Kong University of Science and Technology), Xuanhua Shi (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology)

10:45 (15m, Talk) | Distribution-aware Fairness Test Generation | Journal-first Papers
Sai Sathiesh Rajan (Singapore University of Technology and Design, Singapore), Ezekiel Soremekun (Royal Holloway, University of London), Yves Le Traon (University of Luxembourg, Luxembourg), Sudipta Chattopadhyay (Singapore University of Technology and Design)

11:00 (15m, Talk) | Effective Unit Test Generation for Java Null Pointer Exceptions | Research Papers
Myungho Lee (Korea University), Jiseong Bak (Korea University), Seokhyeon Moon, Yoon-Chan Jhi (Technology Research, Samsung SDS, Seoul, South Korea), Hakjoo Oh (Korea University)

11:15 (15m, Talk) | SlicePromptTest4J: High-coverage Test Generation using LLM via Method Slicing | Research Papers
Zejun Wang (Peking University), Kaibo Liu (Peking University), Ge Li (Peking University), Zhi Jin (Peking University)

11:30 (15m, Talk) | DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning | Research Papers
Davide Corradini (University of Verona), Zeno Montolli (University of Verona), Michele Pasqua (University of Verona), Mariano Ceccato (University of Verona)

11:45 (15m, Talk) | On the Evaluation of Large Language Models in Unit Test Generation | Research Papers (Pre-print available)
Lin Yang (Tianjin University), Chen Yang (Tianjin University), Shutao Gao (Tianjin University), Weijing Wang (College of Intelligence and Computing, Tianjin University), Bo Wang (Beijing Jiaotong University), Qihao Zhu (DeepSeek-AI), Xiao Chu (Huawei Cloud Computing Co. Ltd.), Jianyi Zhou (Huawei Cloud Computing Technologies Co., Ltd.), Guangtai Liang (Huawei Cloud Computing Technologies), Qianxiang Wang (Huawei Technologies Co., Ltd), Junjie Chen (Tianjin University)