ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Thu 31 Oct 2024 11:45 - 12:00 at Gardenia - Test generation

Unit testing is an essential activity in software development for verifying the correctness of software components. However, manually writing unit tests is challenging and time-consuming. The emergence of Large Language Models (LLMs) offers a new direction for automating unit test generation. Existing research primarily focuses on closed-source LLMs (e.g., ChatGPT and Codex) with fixed prompting strategies, leaving the capabilities of advanced open-source LLMs under various prompting settings unexplored. In particular, open-source LLMs offer advantages in data privacy protection and have demonstrated superior performance on some tasks. Moreover, effective prompting is crucial for maximizing an LLM's capabilities. In this paper, we conduct the first empirical study to fill this gap, based on 17 Java projects, five widely used open-source LLMs with different structures and parameter sizes, and comprehensive evaluation metrics. Our findings highlight the significant influence of various prompt factors, show how open-source LLMs perform compared to the commercial GPT-4 and the traditional tool EvoSuite, and identify limitations in LLM-based unit test generation. We then derive a series of implications from our study to guide future research and the practical use of LLM-based unit test generation.
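To make the prompting setting concrete, the sketch below shows one way a prompt for LLM-based Java unit test generation might be assembled. The template wording, its fields (class context, focal method), and the JUnit 4 framework hint are illustrative assumptions for this page, not the exact prompt designs evaluated in the paper.

```python
# Hypothetical prompt template for LLM-based unit test generation.
# Field names and instructions are illustrative, not the study's prompts.
PROMPT_TEMPLATE = """You are a Java developer writing JUnit 4 tests.

Class under test:
{class_context}

Focal method:
{focal_method}

Write a unit test class that covers the focal method, including
normal and boundary inputs. Return only compilable Java code."""


def build_prompt(class_context: str, focal_method: str) -> str:
    """Fill the template with the code under test."""
    return PROMPT_TEMPLATE.format(
        class_context=class_context,
        focal_method=focal_method,
    )


if __name__ == "__main__":
    prompt = build_prompt(
        class_context="public class Calculator { ... }",
        focal_method="public int add(int a, int b) { return a + b; }",
    )
    # First line of the assembled prompt:
    print(prompt.splitlines()[0])
```

Varying pieces of such a template (amount of class context, framework version, natural-language instructions) is the kind of "prompt factor" whose influence the study measures.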

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
10:30 (15m) Talk: Towards Understanding the Effectiveness of Large Language Models on Directed Test Input Generation (Research Papers)
Zongze Jiang (Huazhong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology), Jialun Cao (Hong Kong University of Science and Technology), Xuanhua Shi (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology)
10:45 (15m) Talk: Distribution-aware Fairness Test Generation (Journal-first Papers)
Sai Sathiesh Rajan (Singapore University of Technology and Design, Singapore), Ezekiel Soremekun (Royal Holloway, University of London), Yves Le Traon (University of Luxembourg, Luxembourg), Sudipta Chattopadhyay (Singapore University of Technology and Design)
11:00 (15m) Talk: Effective Unit Test Generation for Java Null Pointer Exceptions (Research Papers)
Myungho Lee (Korea University), Jiseong Bak (Korea University), Seokhyeon Moon, Yoon-Chan Jhi (Technology Research, Samsung SDS, Seoul, South Korea), Hakjoo Oh (Korea University)
11:15 (15m) Talk: SlicePromptTest4J: High-coverage Test Generation using LLM via Method Slicing (Research Papers)
Zejun Wang (Peking University), Kaibo Liu (Peking University), Ge Li (Peking University), Zhi Jin (Peking University)
11:30 (15m) Talk: DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning (Research Papers)
Davide Corradini (University of Verona), Zeno Montolli (University of Verona), Michele Pasqua (University of Verona), Mariano Ceccato (University of Verona)
11:45 (15m) Talk: On the Evaluation of Large Language Models in Unit Test Generation (Research Papers)
Lin Yang (Tianjin University), Chen Yang (Tianjin University), Shutao Gao (Tianjin University), Weijing Wang (College of Intelligence and Computing, Tianjin University), Bo Wang (Beijing Jiaotong University), Qihao Zhu (DeepSeek-AI), Xiao Chu (Huawei Cloud Computing Co. Ltd.), Jianyi Zhou (Huawei Cloud Computing Technologies Co., Ltd.), Guangtai Liang (Huawei Cloud Computing Technologies), Qianxiang Wang (Huawei Technologies Co., Ltd), Junjie Chen (Tianjin University)