ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States
Wed 30 Oct 2024 14:15 - 14:30 at Carr - Verification Chair(s): Tevfik Bultan

Many programming languages include an implementation of the option type, which encodes the absence or presence of values. Incorrect use of the option type results in run-time errors, and unstylistic use results in unnecessary code. Researchers and practitioners have tried to mitigate the pitfalls of the option type, but have yet to evaluate tools for enforcing correctness and good style.
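As a minimal sketch of the kind of run-time error described above (the method name `findUser` and its behavior are hypothetical, not taken from the paper), consider calling `get()` on a Java `Optional` without first checking for presence:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalMisuse {
    // Hypothetical lookup that returns an empty Optional when the user is absent.
    static Optional<String> findUser(String id) {
        return Optional.empty();
    }

    public static void main(String[] args) {
        Optional<String> user = findUser("alice");
        try {
            // Incorrect use: get() without an isPresent() check throws
            // NoSuchElementException when the Optional is empty.
            System.out.println(user.get().length());
        } catch (NoSuchElementException e) {
            System.out.println("run-time error: " + e.getClass().getSimpleName());
        }
    }
}
```

A static verifier for the option type aims to reject code like the `user.get()` call above at compile time, rather than letting the exception surface at run time.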

To address problems of correctness, we developed two modular verifiers that cooperate via a novel form of rely-guarantee reasoning; together, they verify use of the option type. We implemented them in the Optional Checker, an open-source static verifier. The Optional Checker is the first verifier for the option type based on a sound theory; that is, it issues a compile-time guarantee of the absence of run-time errors related to misuse of the option type. We then conducted the first mechanized study of tools that aim to prevent run-time errors related to the option type. We compared the performance of the Optional Checker, SpotBugs, Error Prone, and IntelliJ IDEA on a dataset comprising 1M non-comment, non-blank lines of open-source code. The Optional Checker found 13 previously undiscovered bugs (a superset of those found by all other tools) and had the highest precision, at 93%.

To address problems of style, we conducted a literature review of best practices for the option type. We discovered widely varying opinions about proper style. We implemented linting rules in the Optional Checker to detect violations of guidelines from within Oracle and discovered hundreds of violations. Some of these were objectively bad code, and others reflected different styles.
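One commonly cited style guideline of the kind surveyed above (this particular formulation is an assumption, not quoted from the paper's literature review) is to prefer `Optional`'s functional methods over explicit `isPresent()`/`get()` chains, which add unnecessary code:

```java
import java.util.Optional;

public class OptionalStyle {
    // Unstylistic: a manual presence check followed by get().
    static String verbose(Optional<String> name) {
        if (name.isPresent()) {
            return name.get().toUpperCase();
        }
        return "UNKNOWN";
    }

    // Idiomatic: the same logic via map/orElse, with no explicit get().
    static String idiomatic(Optional<String> name) {
        return name.map(String::toUpperCase).orElse("UNKNOWN");
    }

    public static void main(String[] args) {
        System.out.println(idiomatic(Optional.of("ada")));  // ADA
        System.out.println(idiomatic(Optional.empty()));    // UNKNOWN
    }
}
```

Both methods behave identically; a linting rule can flag the `verbose` form as a style violation even though it causes no run-time error.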

Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
Verification (Research Papers / Tool Demonstrations) at Carr
Chair(s): Tevfik Bultan University of California at Santa Barbara
13:30
15m
Talk
LLM Meets Bounded Model Checking: Neuro-symbolic Loop Invariant Inference (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Guangyuan Wu Nanjing University, Weining Cao Nanjing University, Yuan Yao Nanjing University, Hengfeng Wei State Key Laboratory for Novel Software Technology, Nanjing University, Taolue Chen Birkbeck, University of London, Xiaoxing Ma State Key Laboratory for Novel Software Technology, Nanjing University
13:45
15m
Talk
LLM-Generated Invariants for Bounded Model Checking Without Loop Unrolling (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Muhammad A. A. Pirzada The University of Manchester, Giles Reger University of Manchester, Ahmed Bhayat Independent Scholar, Lucas C. Cordeiro University of Manchester, UK and Federal University of Amazonas, Brazil
14:00
15m
Talk
Proof Automation with Large Language Models
Research Papers
Minghai Lu Purdue University, Benjamin Delaware Purdue University, Tianyi Zhang Purdue University
14:15
15m
Talk
Verifying the Option Type With Rely-Guarantee Reasoning
Research Papers
James Yoo University of Washington, Michael D. Ernst University of Washington, René Just University of Washington
14:30
10m
Talk
CoVeriTeam GUI: A No-Code Approach to Cooperative Software Verification
Tool Demonstrations
Thomas Lemberger LMU Munich, Henrik Wachowitz LMU Munich
14:40
10m
Talk
CoqPilot, a plugin for LLM-based generation of proofs
Tool Demonstrations
Andrei Kozyrev JetBrains Research, Constructor University Bremen, Gleb Solovev JetBrains Research, Constructor University Bremen, Nikita Khramov JetBrains Research, Constructor University Bremen, Anton Podkopaev JetBrains Research, Constructor University