Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction
Fri 19 May 2023 14:00 - 14:15 at Meeting Room 110 - Issue reporting and reproduction Chair(s): Daniel Russo
Many automated test generation techniques have been developed to aid developers in writing tests. To facilitate full automation, most existing techniques aim either to increase coverage or to generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important: our empirical study shows that the number of tests added to open source repositories due to issues was about 28% of the size of the corresponding project test suites. Meanwhile, due to the difficulty of transforming the expected program semantics in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective, and that rank the produced tests according to their likely validity. Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure-reproducing test cases for 33% of all studied cases (251 out of 750), while ranking a bug-reproducing test first for 149 bugs. To mitigate data contamination (i.e., the possibility of the LLM simply remembering the test code either partially or in whole), we also evaluate LIBRO against 31 bug reports submitted after the collection of the LLM training data ended: LIBRO produces bug-reproducing tests for 32% of these bug reports. Overall, our results show that LIBRO has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
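To make the described pipeline concrete, below is a minimal sketch of a LIBRO-style generate-filter-rank loop in Python. This is not the authors' implementation: sample_tests_from_llm and run_test_in_project are hypothetical stand-ins for an LLM API call and a build/test harness, and the ranking shown is a simplified heuristic (preferring tests whose failure output is shared by many independent samples), in the spirit of the paper's validity-based post-processing.

    # Hypothetical sketch of a LIBRO-style bug reproduction loop.
    # sample_tests_from_llm and run_test_in_project are stand-ins for
    # an LLM API and a test harness; they are NOT real library calls.
    from collections import Counter

    PROMPT_TEMPLATE = (
        "# (Few-shot examples of bug report + reproducing test pairs omitted.)\n"
        "# Bug report:\n{report}\n"
        "# Provide a JUnit test method that reproduces this bug:\n"
    )

    def sample_tests_from_llm(prompt, n):
        """Stand-in: sample n candidate test methods from an LLM."""
        raise NotImplementedError

    def run_test_in_project(test_source):
        """Stand-in: inject the test into the buggy project's suite,
        compile and run it, and return (compiled, failed, failure_output)."""
        raise NotImplementedError

    def reproduce(report, n_samples=50):
        """Generate, filter, and rank candidate bug-reproducing tests."""
        candidates = sample_tests_from_llm(
            PROMPT_TEMPLATE.format(report=report), n_samples)
        failing = []
        for test in candidates:
            compiled, failed, output = run_test_in_project(test)
            # A candidate counts as a possible reproduction only if it
            # compiles and fails on the buggy version of the program.
            if compiled and failed:
                failing.append((test, output))
        # Rank tests whose failure output is shared by many samples first:
        # agreement across independent samples suggests the failure reflects
        # the reported bug rather than a flaky or broken test.
        output_counts = Counter(output for _, output in failing)
        failing.sort(key=lambda t: output_counts[t[1]], reverse=True)
        return [test for test, _ in failing]

In the paper's setting, the harness would wrap the benchmark's build and test commands (e.g., Defects4J); the key point the sketch illustrates is that execution-based filtering and ranking, not the LLM alone, decide which generated tests are surfaced to the developer.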
Fri 19 May (displayed time zone: Hobart)
13:45 - 15:15 | Issue reporting and reproduction (Technical Track / DEMO - Demonstrations) at Meeting Room 110. Chair(s): Daniel Russo (Department of Computer Science, Aalborg University)
13:45 15m Talk | Incident-aware Duplicate Ticket Aggregation for Cloud Systems (Technical Track). Jinyang Liu (The Chinese University of Hong Kong), Shilin He (Microsoft Research), Zhuangbin Chen (Chinese University of Hong Kong, China), Liqun Li (Microsoft Research), Yu Kang (Microsoft Research), Xu Zhang (Microsoft Research), Pinjia He (Chinese University of Hong Kong at Shenzhen), Hongyu Zhang (The University of Newcastle), Qingwei Lin (Microsoft Research), Zhangwei Xu (Microsoft Azure), Saravan Rajmohan (Microsoft 365), Dongmei Zhang (Microsoft Research), Michael Lyu (The Chinese University of Hong Kong)
14:00 15m Talk | Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction (Technical Track). Pre-print available.
14:15 15m Talk | On the Reproducibility of Software Defect Datasets (Technical Track)
14:30 15m Talk | Context-aware Bug Reproduction for Mobile Apps (Technical Track). Yuchao Huang, Junjie Wang (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Zhe Liu (Institute of Software, Chinese Academy of Sciences), Song Wang (York University), Chunyang Chen (Monash University), Mingyang Li (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Qing Wang (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences)
14:45 15m Talk | Read It, Don't Watch It: Captioning Bug Recordings Automatically (Technical Track). Sidong Feng (Monash University), Mulong Xie (Australian National University), Yinxing Xue (University of Science and Technology of China), Chunyang Chen (Monash University). Pre-print available.
15:00 7m Talk | BURT: A Chatbot for Interactive Bug Reporting (DEMO - Demonstrations). Yang Song (College of William and Mary), Junayed Mahmud (George Mason University), Nadeeshan De Silva (William & Mary), Ying Zhou (University of Texas at Dallas), Oscar Chaparro (College of William and Mary), Kevin Moran (George Mason University), Andrian Marcus (University of Texas at Dallas), Denys Poshyvanyk (College of William and Mary)