ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

Advances in large language models (LLMs) have paved the way for automated software vulnerability repair approaches that iteratively refine a patch until it becomes plausible. Nevertheless, existing LLM-based vulnerability repair approaches face two notable limitations: 1) they focus only on the repair content and ignore the locations that need to be patched; 2) they lack quality assessment of the candidate patches generated during the iterative process. To tackle these two limitations, we propose LoopRepair, an LLM-based approach that first identifies where a patch should be applied. Furthermore, LoopRepair improves the iterative repair strategy by assessing the quality of test-failing patches and selecting the best patch for the next iteration. We assess patch quality along two dimensions: whether a patch introduces new vulnerabilities, and its taint statement coverage. We evaluated LoopRepair on VulnLoc+, a real-world C/C++ vulnerability repair dataset that contains 40 vulnerabilities and their Proofs-of-Vulnerability. The experimental results demonstrate that LoopRepair substantially outperforms state-of-the-art Neural Machine Translation (NMT)-based, program-analysis-based, and LLM-based vulnerability repair approaches. Specifically, LoopRepair generates 27 plausible patches, 8 to 22 more than the baselines. In terms of correct patch generation, LoopRepair repairs 8 to 13 additional vulnerabilities compared with existing approaches.
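The iterative strategy described above (score each test-failing candidate on the two quality dimensions and carry the best one into the next round) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not LoopRepair's actual implementation: the names `passes_tests`, `introduces_new_vuln`, and `taint_coverage` are hypothetical stand-ins for the paper's checks.

```python
# Sketch of an iterative patch-selection loop with two quality dimensions:
# (1) whether the patch introduces a new vulnerability (hard penalty), and
# (2) taint statement coverage (higher is better). All field names here are
# hypothetical placeholders, not LoopRepair's real interface.

def quality(patch):
    # Patches that introduce new vulnerabilities rank strictly below those
    # that do not; ties are broken by taint statement coverage.
    penalty = 1 if patch["introduces_new_vuln"] else 0
    return (-penalty, patch["taint_coverage"])

def iterative_repair(candidates_per_round, max_iters=3):
    best_failing = None
    for round_candidates in candidates_per_round[:max_iters]:
        for patch in round_candidates:
            if patch["passes_tests"]:
                return patch  # plausible patch found, stop iterating
        # No plausible patch this round: keep the highest-quality failing
        # patch as the seed for the next refinement iteration.
        best_failing = max(round_candidates, key=quality)
    return best_failing
```

The tuple returned by `quality` makes the "no new vulnerability" dimension dominate coverage, so a low-coverage safe patch is still preferred over a high-coverage patch that regresses security.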