ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Wed 30 Oct 2024 13:30 - 13:45 at Carr - Verification. Chair(s): Tevfik Bultan

Loop invariant inference, a key component of program verification, is challenging due to its inherent undecidability and the complex loop behaviors that arise in practice. Recently, machine-learning-based techniques have demonstrated impressive performance in generating loop invariants automatically. However, these methods rely heavily on labeled training data and are intrinsically random and uncertain, leading to unstable performance. In this paper, we investigate a synergy of large language models (LLMs) and bounded model checking (BMC) to address these issues. The key observation is that, although an LLM may not return the correct loop invariant in a single response, it can usually provide all the individual predicates of the correct loop invariant across multiple responses. To this end, we propose a "query-filter-reassemble" strategy: we first leverage the generative power of LLMs to produce a set of candidate invariants, requiring no training data. We then employ BMC to identify valid predicates within these candidates, reassemble the valid predicates into new candidate invariants, and check them with off-the-shelf SMT solvers. The checker's feedback is incorporated into the prompt for the next round of LLM querying. We expand the existing benchmark of 133 programs to 316 programs, providing a more comprehensive testing ground. Experimental results demonstrate that our approach significantly outperforms the state of the art, successfully generating loop invariants for 309 of the 316 cases, whereas the best existing baseline handles only 218 programs.
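The query-filter-reassemble loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LLM query, the BMC predicate filter, and the SMT inductiveness check are all stand-in stubs (`query_llm`, `bmc_valid`, `smt_check` are hypothetical names); a real system would call an LLM API, a bounded model checker, and an SMT solver such as Z3.

```python
# Sketch of the "query-filter-reassemble" strategy (all checkers stubbed).
from itertools import combinations

def query_llm(prompt):
    """Stub LLM: returns candidate invariants as lists of predicates."""
    return [["x >= 0", "y == 2*x"], ["x >= 0", "x <= n"]]

def bmc_valid(predicate):
    """Stub BMC filter: keeps predicates not refuted within the bound."""
    return predicate != "y == 2*x"  # pretend BMC refutes this one

def smt_check(invariant):
    """Stub SMT check: accepts only the 'correct' reassembled invariant."""
    return set(invariant) == {"x >= 0", "x <= n"}

def infer_invariant(prompt, rounds=3):
    for _ in range(rounds):
        candidates = query_llm(prompt)                                   # query
        predicates = {p for c in candidates for p in c if bmc_valid(p)}  # filter
        for k in range(len(predicates), 0, -1):                          # reassemble
            for combo in combinations(sorted(predicates), k):
                if smt_check(list(combo)):
                    return list(combo)
        # On failure, fold feedback into the prompt for the next round.
        prompt += "\nPrevious candidates failed; propose different predicates."
    return None
```

With these stubs, `infer_invariant("infer a loop invariant")` filters out the refuted predicate, reassembles the surviving ones, and returns the conjunction `x <= n` and `x >= 0` in the first round.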

Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
Verification: Research Papers / Tool Demonstrations at Carr
Chair(s): Tevfik Bultan University of California at Santa Barbara
13:30
15m
Talk
LLM Meets Bounded Model Checking: Neuro-symbolic Loop Invariant Inference (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Guangyuan Wu Nanjing University, Weining Cao Nanjing University, Yuan Yao Nanjing University, Hengfeng Wei State Key Laboratory for Novel Software Technology, Nanjing University, Taolue Chen Birkbeck, University of London, Xiaoxing Ma State Key Laboratory for Novel Software Technology, Nanjing University
13:45
15m
Talk
LLM-Generated Invariants for Bounded Model Checking Without Loop Unrolling (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Muhammad A. A. Pirzada The University of Manchester, Giles Reger University of Manchester, Ahmed Bhayat Independent Scholar, Lucas C. Cordeiro University of Manchester, UK and Federal University of Amazonas, Brazil
14:00
15m
Talk
Proof Automation with Large Language Models
Research Papers
Minghai Lu Purdue University, Benjamin Delaware Purdue University, Tianyi Zhang Purdue University
14:15
15m
Talk
Verifying the Option Type With Rely-Guarantee Reasoning
Research Papers
James Yoo University of Washington, Michael D. Ernst University of Washington, René Just University of Washington
14:30
10m
Talk
CoVeriTeam GUI: A No-Code Approach to Cooperative Software Verification
Tool Demonstrations
Thomas Lemberger LMU Munich, Henrik Wachowitz LMU Munich
14:40
10m
Talk
CoqPilot, a plugin for LLM-based generation of proofs
Tool Demonstrations
Andrei Kozyrev JetBrains Research, Constructor University Bremen, Gleb Solovev JetBrains Research, Constructor University Bremen, Nikita Khramov JetBrains Research, Constructor University Bremen, Anton Podkopaev JetBrains Research, Constructor University