26th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2025)
Welcome to the website of the 26th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2025).
VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas. VMCAI 2025 will be the 26th edition in the series.
VMCAI will take place on January 20-21, 2025, as a physical (in-person) event. For each accepted paper, at least one author is required to register for the conference and present the paper in person.
Sponsorship
VMCAI welcomes diamond, silver, and bronze sponsors.
Call for Papers
VMCAI 2025 is the 26th International Conference on Verification, Model Checking, and Abstract Interpretation. The conference will be held on January 20-21, 2025. VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas.
Scope
The program will consist of refereed research papers as well as invited lectures and tutorials. Research contributions can report new results as well as experimental evaluations and comparisons of existing techniques.
Topics include, but are not limited to:
- Program Verification
- Model Checking
- Abstract Interpretation
- Abstract Domains
- Program Synthesis
- Static Analysis
- Type Systems
- Deductive Methods
- Program Logics
- First-Order Theories
- Decision Procedures
- Interpolation
- Horn Clause Solving
- Program Certification
- Separation Logic
- Probabilistic Programming and Analysis
- Error Diagnosis
- Detection of Bugs and Security Vulnerabilities
- Program Transformations
- Hybrid and Cyber-physical Systems
- Concurrent and Distributed Systems
- Analysis of Numerical Properties
- Analysis of Smart Contracts
- Analysis of Neural Networks
- Case Studies on all of the above topics
Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, and object-oriented programming.
Important Dates (AoE, UTC-12)
- October 1, 2024: Paper submission deadline (extended)
- November 11, 2024: Notification
- November 22, 2024: Camera-ready version due
Submissions
Submissions are required to follow Springer’s LNCS format. The page limit depends on the paper’s category (see below). In each category, additional material beyond the page limit may be placed in a clearly marked appendix, to be read at the discretion of the reviewers and to be omitted in the final version. Style files and further formatting guidelines can be found on the Springer website. Submission is via EasyChair. Note: submissions will open on August 26th.
All accepted papers will be published in Springer’s Lecture Notes in Computer Science series. The corresponding author of each paper will need to complete and sign a License-to-Publish form to be submitted together with the camera-ready version.
Submissions will undergo a single-blind review process. Submissions should not be anonymized for the purposes of review. There will be three categories of papers: regular papers, tool papers and case studies. Papers in each category have a different page limit and will be evaluated differently.
Regular papers clearly identify and justify an advance to the field of verification, abstract interpretation, or model checking. Where applicable, they are supported by experimental validation. Regular papers are restricted to 20 pages in LNCS format, not counting references.
Tool papers present a new tool, a new tool component, or novel extensions to an existing tool. They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool’s functionality, discuss the tool’s practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities. Authors are strongly encouraged to make their tools publicly available and submit an artifact. Tool papers are restricted to 12 pages in LNCS format, not counting references.
Case studies are expected to describe the use of verification, model checking, and abstract interpretation techniques in new application domains or industrial settings. Papers in this category do not necessarily need to present original research results but are expected to contain novel applications of formal methods techniques as well as an evaluation of these techniques in the chosen application domain. Such papers are encouraged to discuss the unique challenges of transferring research ideas to a real-world setting and reflect on any lessons learned from this technology transfer experience. Case study papers are restricted to 20 pages in LNCS format, not counting references. (Shorter case study papers are also welcome.)
Call for Artifacts
VMCAI 2025 offers the option to submit an artifact along with a paper. Artifacts are any additional material that substantiates the claims made in the paper, and ideally makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results. The goal of artifact evaluation is twofold. On the one hand, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. On the other hand, we want to simplify the independent replication of results presented in the paper and to ease future comparison with existing approaches. Artifacts of interest include (but are not limited to):
- Software, Tools, or Frameworks
- Data sets
- Test suites
- Machine checkable proofs
- Any combination of them
- Any other artifact described in the paper
Artifact submission is optional. However, we highly encourage all authors to also submit an artifact. A successfully evaluated artifact can increase your chance of being accepted since the evaluation result of your artifact is taken into account during paper reviewing.
Artifact evaluation is single-blind; artifacts should not be anonymized.
Badges
We award three types of badges, following the ACM guidelines:
- Available for artifacts that are publicly available under a DOI.
- Functional for artifacts that are successfully evaluated by the artifact evaluation committee.
- Reusable for functional artifacts that are of exceptional quality and additionally provide careful documentation to allow for future repurposing.
Important Dates (AoE, UTC-12)
The artifact evaluation will be done in parallel with the evaluation of the submitted paper. Since VMCAI has a tight schedule, we ask authors who intend to submit their artifact for evaluation to register it in advance, so that on the artifact submission deadline the artifact evaluation (AE) committee members are ready to start the test phase.
What | When
---|---
Artifact registration | October 1, 2024
Artifact submission deadline | October 3, 2024
Artifact test phase notification | October 10, 2024
Artifact notification | November 8, 2024
All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.
- Is the artifact consistent with the paper and the claims made by the paper?
- Are the results of the paper replicable through the artifact?
- Is the artifact complete, i.e., how many of the results of the paper are replicable?
- Is the artifact well-documented?
- Is the artifact easy to use?
The artifact evaluation is performed in the following two phases.
- In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, crashes on simple examples). If any problems are detected, the authors are informed of the outcome and asked for clarification. The authors will get 3 days to respond to the reviews in case problems are encountered.
- In the assessment phase, reviewers will try to reproduce any experiments or activities and evaluate the artifact with respect to the questions detailed above.
Artifact Submission
Submission page: https://easychair.org/conferences/?conf=vmcai2025 (Choose the option to submit to the artifact evaluation track).
An artifact submission consists of:
- the title `[artifact-NN] XX`, where `XX` is the title of your paper and `NN` is its submission number, as assigned by EasyChair on the VMCAI’25 paper submission page,
- an abstract that summarizes the artifact and explains its relation to the paper,
- the URL (preferably a DOI) from which a `.zip` or `.tar.gz` archive file containing the artifact can be downloaded,
- the SHA256 checksum of the archive file, and
- the `.pdf` file of the submitted paper.
The URL must point to an independent service (such as Zenodo, Figshare, Microsoft OneDrive, Google Drive, Dropbox, GitHub, Bitbucket, or a public GitLab instance) to help protect the anonymity of the reviewers.
You can generate the checksum using the following command-line tools (a verification sketch follows the list):
- Linux: `sha256sum <file>`
- Windows: `CertUtil -hashfile <file> SHA256`
- macOS: `shasum -a 256 <file>`
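For example, to record and later re-verify the checksum on Linux before submitting (a minimal sketch; `artifact.tar.gz` is a placeholder file name):

```sh
# Compute the checksum and store it next to the archive
sha256sum artifact.tar.gz > artifact.tar.gz.sha256

# Re-verify the archive against the stored checksum before submitting
sha256sum -c artifact.tar.gz.sha256
```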
Submission Contents
Your artifact archive file must contain the following:
- The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper.
  - We recommend you provide easy-to-use scripts for the user to reproduce and analyze the results, i.e., scripts to run tools with the intended configuration and scripts to visualize their outcome as tables/plots (see the sketch after this list).
  - The review will be single-blind. Please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
- The logs from your experiments.
  - With the logs and the appropriate analysis script, we can easily match the outcomes reported in the paper.
- A `LICENSE` file. Your license needs to allow the artifact evaluation chairs to download and distribute the artifact to the artifact evaluation committee members, and the committee members must be allowed to evaluate the artifact, e.g., use, execute, and modify it for the purpose of artifact evaluation.
- A `README.pdf` that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it should consist of the following parts:
  - A Getting Started Guide detailing how to set up the artifact and prepare it for replicating the results from the paper. We would appreciate it if you supported the reviewers not only during the main review phase but also during the testing phase. To this end, it would be helpful to provide instructions that allow installation and rudimentary testing (i.e., in such a way that technical difficulties surface early) in as little time as possible, together with the expected outcome of such rudimentary testing (so reviewers know whether it was successful).
  - Step-by-Step Instructions for replicating the results from the paper.
    - Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
    - Describe in detail how to replicate the results in the paper, in particular the steps that need to be performed. To simplify the reviewing process, we recommend providing evaluation scripts (where applicable).
    - Precisely state the resource requirements (RAM, number of cores, CPU frequency, etc.) that you used to test your artifact.
    - Provide a rough estimate of the time needed to run the experiments.
    - For tasks that require a large amount of resources (hardware or time), we recommend providing a way to replicate a subset of the results with reasonably modest resource and time limits, e.g., within 8 hours on a reasonable personal computer equipped with 8 GB of RAM. In this case, please also include a script to replicate only a subset of the results.
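To illustrate the kind of driver script we have in mind, here is a minimal sketch; all names in it (`run_all.sh`, `mytool`, `paper.cfg`, the `benchmarks/` lists) are hypothetical placeholders, not a prescribed layout:

```sh
#!/bin/sh
# run_all.sh -- hypothetical driver script for replicating the paper's results.
# Usage: ./run_all.sh [--subset]
set -e  # abort on the first error

# Pick the full benchmark set, or a reduced one for modest machines
BENCHMARKS="benchmarks/full.list"
if [ "$1" = "--subset" ]; then
  BENCHMARKS="benchmarks/subset.list"
fi

mkdir -p logs results
while read -r bench; do
  # Run the (hypothetical) tool with the configuration used in the paper
  ./mytool --config paper.cfg "$bench" > "logs/$(basename "$bench").log"
done < "$BENCHMARKS"

# Turn the raw logs into the tables reported in the paper
./make_tables.sh logs > results/table1.txt
```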
Artifact Packaging Guidelines
The preferred way to package your artifact is a Docker image. This is to ensure that your artifact can be run on both traditional `x86` (e.g., AMD and Intel processors) and newer `arm` (e.g., Apple Silicon) platforms. To ensure that your artifact performs similarly on `x86` and `arm`, we recommend you submit one of the following:
- a Docker build script (i.e., a `Dockerfile`) if building the Docker image runs in reasonable time, or
- pre-compiled Docker images for `x86` and `arm` platforms.

Beware: if you provide only an `x86` image, then Docker will emulate it on `arm` platforms, and vice versa. Emulation will significantly impair performance. (A sketch of such a `Dockerfile` is given below.)
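For concreteness, a minimal `Dockerfile` along these lines might serve as a starting point; the base image, packages, and build steps below are illustrative assumptions, not requirements:

```dockerfile
# Example Dockerfile (all package and path names are placeholders).
# An official multi-arch base image lets the same file build natively
# on both x86 and arm.
FROM ubuntu:22.04

# Install whatever your artifact needs to build and run
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential python3 && \
    rm -rf /var/lib/apt/lists/*

# Copy the artifact into the image and build it
COPY . /artifact
WORKDIR /artifact
RUN make

# Drop reviewers into a shell so they can explore the artifact
CMD ["/bin/bash"]
```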
Docker Images
Below is a quick-start guide for producing Docker images, provided you have an adequate `Dockerfile` to generate your artifact. Please refer to the official Docker documentation, in particular the pages on cross-compilation, for further details and troubleshooting.
To create and export a Docker image native to your platform, you can do:

```sh
docker build . -t <artifact-name>:latest
docker save -o <artifact-name>_<your_arch>.tar <artifact-name>:latest
```
For cross-compilation, Docker version 19.03 or newer provides `buildx` on most platforms. First, create a new cross-compile builder:

```sh
docker buildx create --name mybuilder --driver docker-container --bootstrap
docker buildx use mybuilder
```
Then, cross-compile and export a Docker image:

```sh
docker buildx build --platform linux/<other_arch> -t <artifact-name>:latest --load .
docker save -o <artifact-name>_<other_arch>.tar <artifact-name>:latest
```
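On the reviewers' side, such an exported image would typically be imported and run along these lines (a sketch; `<artifact-name>` as above):

```sh
# Import the image archive into the local Docker installation
docker load -i <artifact-name>_<other_arch>.tar

# Start an interactive container; --rm removes it again on exit
docker run --rm -it <artifact-name>:latest
```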
We recommend that you compress the resulting Docker images to reduce the overall size of your artifact. Please use a compression format that is compatible with `docker load`, i.e., either `gz` or `xz`.
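For example, continuing with the archive names used above (a sketch):

```sh
# Compress the exported image; xz remains compatible with docker load
xz -9 <artifact-name>_<your_arch>.tar

# docker load can read the compressed archive directly
docker load -i <artifact-name>_<your_arch>.tar.xz
```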
Virtual Machines
We discourage packaging artifacts as virtual machine images, as they are tricky, if not impossible, to run under macOS on Apple Silicon. However, should you be unable to use Docker, we will accept virtual machines compatible with VirtualBox. Please indicate the use of a virtual machine image, and the platform it is intended for, in the abstract of your submission.
Publication of Artifacts
The artifact evaluation committee uses the submitted artifact only for the artifact evaluation. It will not publicize the artifact or any parts of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. We encourage the authors of artifacts to also make their artifacts permanently available, e.g., on Zenodo or Figshare, and to refer to them in their papers via a DOI. All artifacts for which a DOI exists that is known to the artifact evaluation committee are granted the Available badge.
Registration for VMCAI 2025
Registration for VMCAI 2025 will be handled through easyconferences.
The early registration deadline is December 20, 2024.
All papers accepted for VMCAI 2025 must have at least one registered author.
Steps for Registering
- Sign up for an account on easyconferences.
- The signup will send an activation link to the email address you used to register. Please click the link to activate your account.
- Use the direct link to log in and register for VMCAI 2025.
Registration Rates for VMCAI 2025
The registration rates are as follows (note: early registration ends on December 20, 2024).
Type of Registration | Early Rate (USD) | Regular Rate (USD)
---|---|---
Student | 395 | 425
Regular | 535 | 635
The registration rate includes the cost of the venue, lunch, morning and afternoon coffee breaks, the easyconferences charge, and VAT.
Cancellation Policy
Here is the cancellation policy from easyconferences. Please note that we expect at least one author to be registered for each accepted paper in the proceedings.
a. Authors and other participants may cancel their registration up to the early registration deadline. A USD 50 administrative charge will apply.
b. Registration fees are non-refundable for all cancellations after the early registration deadline has elapsed.
c. The conference organizers will accept a substitute at no additional cost if a registered conference participant is unable to attend the conference.
Visa Letter
Please read the information below carefully before you request a visa invitation letter.
- You will need to register for the conference before requesting your visa invitation letter.
- To request a visa invitation letter, please send the following information:
  - Name as it appears on your passport.
  - Your position, affiliation, and academic/professional title (e.g., Professor of Computer Science, Indian Institute of Science, Bengaluru, India).
  - Passport number, country, and date of issue.
  - Dates you are planning to be in the USA.
  - A copy of your registration for VMCAI.
  - If you have an accepted paper that you are co-authoring, please tell us the title of the paper.
  - If you plan to attend POPL 2025, please let us know and include your POPL registration. You will need a separate invitation letter from the POPL organizers, but we can mention in our letter that you are staying on for POPL.
- Please email Sriram Sankaranarayanan <srirams (AT) Colorado (DOT) edu> (substitute (AT) and (DOT) with @ and . respectively) with the subject “VMCAI 2025 visa request”.
- Please allow 2-3 business days to receive your invitation letter.
List of Accepted Papers
Regular/Case-Study Papers
- Yuncheng Wang, Puzhuo Liu, Yaowen Zheng, Dongliang Fang, Zhiwen Pan, Shuaizong Si, Weidong Zhang and Limin Sun. Automated Flaw Detection for Industrial Robot RESTful Service
- Florian Bruse. Space-Efficient Model-Checking of Higher-Order Recursion Schemes
- Jingyu Ke, Hongfei Fu, Hongming Liu, Zhouyue Sun, Liqian Chen and Guoqiang Li. Affine Disjunctive Invariant Generation with Farkas’ Lemma
- Yusen Su, Jorge Navas, Arie Gurfinkel and Isabel Garcia-Contreras. Automatic Inference of Relational Object Invariants
- Ruoxi Zhang, Richard Trefler and Kedar Namjoshi. Synthesis of Parametric Locally Symmetric Protocols from Abstract Temporal Specifications
- Jared Pincus and Eric Koskinen. An Abstract Domain for Heap Commutativity
- Muqsit Azeem, Debraj Chakraborty, Sudeep Kanav, Jan Kretinsky, Mohammadsadegh Mohagheghi, Stefanie Mohr and Maximilian Weininger. 1–2–3–Go! Policy Synthesis for Parameterized Markov Decision Processes via Decision-Tree Learning and Generalization
- Isabella Mastroeni. Abstract Local Completeness: A Local form of Abstract Non-Interference
- Kyle Storey and Eric Mercer. Property-agnostic base case extension for scalable verification of distributed systems
- Mario Bucev, Samuel Chassot, Simon Felix, Filip Schramka and Viktor Kuncak. Formally Verifiable Generated ASN.1/ACN Encoders and Decoders: A Case Study
- Nicola Assolini, Alessandra Di Pierro and Isabella Mastroeni. A Static Analysis of Entanglement
- Gustav S. Bruhns, Martin P. Hansen, Rasmus Hebsgaard, Frederik M. W. Hyldgaard and Jiri Srba. ExpectAll: A BDD Based Approach for Link Failure Resilience in Elastic Optical Networks
- Julian Erhard, Manuel Bentele, Matthias Heizmann, Dominik Klumpp, Simmo Saan, Frank Schüssele, Michael Schwarz, Helmut Seidl, Sarah Tilscher and Vesal Vojdani. Correctness Witnesses for Concurrent Programs: Bridging the Semantic Divide with Ghosts
- Benedikt Maderbacher, Felix Windisch, Alberto Larrauri and Roderick Bloem. Synthesis of Controllers for Continuous Blackbox Systems
- Junfeng Yang, Min Zhang, Xin Chen and Qin Li. Formal Verification of Probabilistic Deep Reinforcement Learning Policies with Abstract Training
- Paul Eichler, Swen Jacobs and Chana Weil-Kennedy. Parameterized Verification of Systems with Precise (0,1)-Counter Abstraction
- Devora Chait-Roth and Kedar Namjoshi. Constructing Trustworthy Smart Contracts
- Claire Dross, Joffrey Huguet and Johannes Kanig. Two-way collaboration between flow and proof in SPARK
Tool Papers
- Daisuke Ishii. A Real-Blasting Extension of Cvc5 for Reasoning about Floating-Point Arithmetic
- Gautam Muduganti, Utpal Bora, Saurabh Joshi and Ramakrishna Upadrasta. LLOR: Automated Repair of OpenMP Programs