ICSE 2022
Sun 8 - Fri 27 May 2022

Test-based generate-and-validate automated program repair (APR) systems often generate plausible patches that pass the test suite without actually fixing the bug. Several approaches for automatically assessing APR-generated patches have been proposed. Among them, dynamic patch correctness assessment relies on comparing run-time information obtained from the program before and after patching. Object similarity-based dynamic patch ranking approaches, specifically, capture system-state snapshots after the impact point of a patch and express behavioral differences in terms of object graph similarity. All dynamic approaches rely on the assumption that, when running the originally passing test cases, correct patches will not significantly alter program behavior. Furthermore, most of them also assume that correct patches will significantly change program behavior for the originally failing test cases.
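To make this ranking idea concrete, here is a minimal, hypothetical sketch. It is not the ObjSim$_{mh}$ or CIP implementation: object graphs are flattened to sets of field=value strings for simplicity, and all class and method names are invented for illustration. It rewards similarity between the original and patched snapshots on passing tests, and dissimilarity on failing tests, mirroring the two assumptions stated above.

```java
import java.util.*;

// Hypothetical sketch of object similarity-based patch ranking.
// A "snapshot" is flattened here to a set of field=value strings
// captured at the patch impact point; real tools compare object graphs.
public class PatchRanker {

    /** Jaccard similarity between two flattened state snapshots. */
    static double similarity(Set<String> before, Set<String> after) {
        if (before.isEmpty() && after.isEmpty()) return 1.0;
        Set<String> inter = new HashSet<>(before);
        inter.retainAll(after);
        Set<String> union = new HashSet<>(before);
        union.addAll(after);
        return (double) inter.size() / union.size();
    }

    /**
     * Scores one patch under the two assumptions from the abstract:
     * correct patches preserve behavior on originally passing tests
     * (high similarity) and change behavior on originally failing
     * tests (low similarity). Both maps are assumed to cover the
     * same test names.
     */
    static double score(Map<String, Set<String>> origStates,
                        Map<String, Set<String>> patchedStates,
                        Set<String> passingTests) {
        double total = 0;
        int n = 0;
        for (String test : origStates.keySet()) {
            double sim = similarity(origStates.get(test), patchedStates.get(test));
            total += passingTests.contains(test) ? sim : 1.0 - sim;
            n++;
        }
        return n == 0 ? 0 : total / n;
    }
}
```

A ranker built this way would sort a bug's plausible patches in decreasing order of score and place the highest-scoring patch at the top-1 position for manual inspection.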

This paper presents the results of an extensive empirical study of two object similarity-based approaches (i.e., ObjSim$_{mh}$ and CIP) used to rank 1,290 APR-generated patches from previous APR research. We found that although ObjSim$_{mh}$ outperforms CIP in terms of the number of patches ranked in the top-1 position, it still does not improve over a random baseline ranking, which represents the setting with no automatic patch correctness assessment in place. This observation warrants further research on the validity of the assumptions underlying these two techniques, as well as other techniques based on similar assumptions.
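For context on the random baseline mentioned above, the following hypothetical sketch computes the expected number of top-1 hits when each bug's plausible patches are ordered uniformly at random: for a bug with t plausible patches of which c are correct, a random ordering puts a correct patch first with probability c / t. The Bug counts below are invented for illustration and are not the study's data.

```java
import java.util.List;

// Hypothetical sketch: expected top-1 hits of a uniformly random
// ranking, i.e., the "no assessment" baseline.
public class RandomBaseline {
    record Bug(int correctPatches, int totalPatches) {}

    static double expectedTop1(List<Bug> bugs) {
        double expected = 0;
        for (Bug b : bugs) {
            if (b.totalPatches() > 0) {
                // Probability that a correct patch lands at rank 1.
                expected += (double) b.correctPatches() / b.totalPatches();
            }
        }
        return expected;
    }

    public static void main(String[] args) {
        // Illustrative numbers only; the study's dataset spreads
        // 1,290 patches over many bugs.
        List<Bug> bugs = List.of(new Bug(1, 5), new Bug(2, 8), new Bug(0, 3));
        System.out.printf("Expected top-1 hits: %.2f%n", expectedTop1(bugs));
    }
}
```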

Thu 19 May

Displayed time zone: Eastern Time (US & Canada)

09:15 - 09:30
Revisiting Object Similarity-based Patch Ranking in Automated Program Repair: An Extensive Study (APR at APR room)
09:15
5m
Talk
Revisiting Object Similarity-based Patch Ranking in Automated Program Repair: An Extensive Study
APR
Ali Ghanbari, Iowa State University
09:20
10m
Live Q&A
Q&A
APR