Test-based automated program repair (APR) has attracted considerable attention from both industry and academia. Despite the significant progress made in recent studies, the overfitting problem (i.e., a generated patch passes all the tests but is nonetheless incorrect) remains a major and long-standing challenge. Consequently, many automated techniques have been proposed to assess the correctness of patches, either during the patch generation phase or during the evaluation of APR techniques. However, the effectiveness of these existing techniques has not been systematically compared, and little is known about their respective advantages and disadvantages. To fill this gap, we performed a large-scale empirical study in this paper. Specifically, we systematically investigated the effectiveness of existing automated patch correctness assessment techniques, both static and dynamic, based on 902 patches automatically generated by 21 APR tools from 4 different categories (the largest such benchmark in the literature to date). Our empirical study revealed the following major findings: (1) static code features capturing patch syntax and semantics are generally effective in differentiating overfitting patches from correct ones; (2) dynamic techniques can generally achieve high precision, while heuristics based on static code features are more effective in terms of recall; (3) existing techniques are more effective on certain projects and certain types of APR techniques, and less effective on others; (4) existing techniques are highly complementary to each other: a single technique can detect at most 53.5% of the overfitting patches, whereas 93.3% of them can be detected by at least one technique. Based on these findings, we designed an integration strategy that first integrates static code features via learning and then combines the result with other techniques via majority voting. Our experiments show that this strategy significantly enhances the performance of existing patch correctness assessment techniques.
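The abstract describes a two-stage integration: a learned verdict from static code features combined with other techniques by majority voting. As a rough illustration only (not the authors' actual implementation), and with all technique names below hypothetical, such a vote over binary correctness verdicts could be sketched as:

```python
from collections import Counter


def majority_vote(verdicts):
    """Combine per-technique verdicts ('overfitting' or 'correct') by majority.

    `verdicts` maps a technique name to its verdict for one patch.
    Ties are broken conservatively by keeping the patch ('correct'),
    an assumption made for this sketch.
    """
    counts = Counter(verdicts.values())
    if counts["overfitting"] > counts["correct"]:
        return "overfitting"
    return "correct"


# Hypothetical example: a learned static-feature classifier plus two
# dynamic techniques vote on a single generated patch.
votes = {
    "static_feature_model": "overfitting",
    "dynamic_technique_a": "overfitting",
    "dynamic_technique_b": "correct",
}
print(majority_vote(votes))  # -> overfitting
```

Because the paper finds the techniques highly complementary (93.3% of overfitting patches are caught by at least one technique), aggregating their verdicts is a natural way to trade off the high precision of dynamic techniques against the higher recall of static heuristics.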
Thu 24 Sep (displayed time zone: UTC, Coordinated Universal Time)
08:00 - 09:00
08:00 (20m) Talk | No Strings Attached: An Empirical Study of String-related Software Bugs | Research Papers | Pre-print, File Attached
08:20 (20m) Research paper | Automated Patch Correctness Assessment: How Far are We? | Research Papers | Shangwen Wang (National University of Defense Technology), Ming Wen (Huazhong University of Science and Technology), Bo Lin (National University of Defense Technology), Hongjun Wu (National University of Defense Technology), Yihao Qin (National University of Defense Technology), Deqing Zou (Huazhong University of Science and Technology), Xiaoguang Mao (National University of Defense Technology), Hai Jin (Huazhong University of Science and Technology) | DOI, Pre-print, Media Attached
08:40 (20m) Research paper | Evaluating Representation Learning of Code Changes for Predicting Patch Correctness in Program Repair | Research Papers | Haoye Tian (University of Luxembourg), Kui Liu (University of Luxembourg), Abdoul Kader Kaboré (University of Luxembourg), Anil Koyuncu (University of Luxembourg), Li Li (Monash University), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)