Historically, formal methods academic research and practical software development have had limited mutual interactions — except possibly in specialized domains such as safety-critical software. In recent times, the outlook has considerably improved: on the one hand, formal methods research has delivered more flexible techniques and tools that can support various aspects of the software development process — from user requirements elicitation, to design, implementation, verification and validation, as well as the creation of documentation. On the other hand, software engineering has developed a growing interest in rigorous techniques applied at scale.
The FormaliSE conference series promotes work at the intersection of the formal methods and software engineering communities, providing a venue to exchange ideas, experiences, techniques, and results. We believe more collaboration between these two communities can be mutually beneficial by fostering the creation of formal methods that are practically useful and by helping develop higher-quality software.
Originally a workshop event, since 2018 FormaliSE has been organized as a conference co-located with ICSE. The 13th edition of FormaliSE will also take place as a co-located conference of ICSE 2025.
Areas of interest include:
- requirements formalization and formal specification;
- approaches, methods and tools for verification and validation;
- formal approaches to safety and security related issues;
- analysis of performance and other non-functional properties based on formal approaches;
- scalability of formal methods applications;
- integration of formal methods within the software development lifecycle (e.g., change management, continuous integration, regression testing, and deployment);
- model-based engineering approaches;
- correctness-by-construction approaches for software and systems engineering;
- application of formal methods to specific domains, e.g., autonomous, cyber-physical, intelligent, and IoT systems;
- formal methods for AI-based systems (FM4AI), and AI applied in formal method approaches (AI4FM);
- formal methods in a certification context;
- case studies developed/analyzed with formal approaches;
- experience reports on the application of formal methods to real-world problems;
- guidelines to use formal methods in practice;
- usability of formal methods.
This program is tentative and subject to change.
Sun 27 Apr (displayed time zone: Eastern Time, US & Canada)
07:00 - 19:00 | Ready Room Sunday (ICSE Social, Networking and Special Rooms) at 209. The Ready Room will be available throughout the week. There will be some tables with computers where people can edit presentations (bring them on a USB stick) and upload presentations to the presentation rooms through the Contact 1 website. There will also be AV technicians to help if needed. You do not need to use the Ready Room; you have several choices: you can upload your presentation from your own computer in advance of your session (even days in advance) at the Contact 1 website (you will be sent a link), or you can plug your computer in using an HDMI cable when you are starting your presentation. This last option is available but not recommended, since it increases the chance of delays. There will be some tables and couches in the Ready Room where you can get work done or have small get-togethers with people. This room will not be ‘quiet’. If you want a quiet place to work or chill out (library quiet, no talking), Room 209 will be available much of the time. The Ready Room will also have some poster boards.
09:00 - 12:30 | Child Care Sunday AM (ICSE Social, Networking and Special Rooms) at 102. Child Care at ICSE is free, but you must have registered for child care when you registered for the conference. If you need to add child care to your registration, please contact the registration desk.
09:00 - 10:30 | Keynote (90 min): Assuring AI in Autonomous Driving: Challenges and Emerging Approaches (Research Track)
10:30 - 11:00 | Coffee break: Sunday Morning Break (ICSE Catering)
11:00 - 12:30 | Session 1: Formal Methods and Autonomous Systems (Research Track) at 203. Chair(s): Divya Gopinath, NASA Ames (KBR Inc.)
11:00 30m Talk | CPS Falsification using Autoencoded Input Models (Research Track)
11:30 30m Talk | Modeling Language for Scenario Development of Autonomous Driving Systems (Research Track). Toshiaki Aoki (JAIST), Takashi Tomita (JAIST), Tatsuji Kawai (Kochi University), Daisuke Kawakami (Mitsubishi Electric Corporation), Nobuo Chida (Mitsubishi Electric Corporation)
12:00 30m Talk | Robustness Verification of Video Classification Neural Networks (Research Track). Samuel Sasaki (Vanderbilt University), Preston K. Robinette (Vanderbilt University), Diego Manzanas Lopez (Vanderbilt University), Taylor T Johnson (Vanderbilt University)
12:30 - 14:00 | Lunch: Sunday Lunch (ICSE Catering)
14:00 - 15:30 | Session 2: Theorem Proving and Probabilistic Model Checking (Research Track) at 203. Chair(s): Nancy Day, University of Waterloo, Canada
14:00 30m Talk | The Burden of Proof: Automated Tooling for Rapid Iteration on Large Mechanised Proofs (Research Track). Chengsong Tan (Imperial College London), Kaihong, Alastair F. Donaldson (Imperial College London), Jonathan Julian Huerta y Munive (Czech Technical University), John Wickerson (Imperial College London)
14:30 30m Talk | A proof-based ground algebraic meta-model for reasoning on ASTD in Event-B (Research Track). Christophe Chen (INPT-ENSEEIHT/IRIT), Peter Rivière (INPT-ENSEEIHT/IRIT, University of Toulouse, France), Neeraj Kumar Singh (INPT-ENSEEIHT/IRIT), Guillaume Dupont (INPT-ENSEEIHT), Yamine Ait Ameur (IRIT/INPT-ENSEEIHT), Marc Frappier (Université de Sherbrooke, Canada)
15:00 30m Talk | Probabilistic Model Checking of Disaster Resource Distribution Strategies (Research Track)
15:30 - 16:00 | Break: Sunday Afternoon Break (ICSE Catering)
16:00 - 17:30 | Session 3: Temporal Logic and Contracts (Research Track) at 203. Chair(s): Domenico Bianculli, University of Luxembourg
16:00 30m Talk | Specifying Distributed Hash Tables with Allen Temporal Logic (Research Track). Nuno Policarpo (Instituto Superior Técnico, University of Lisbon), José Fragoso Santos (INESC-ID/Instituto Superior Técnico, Portugal), Alcino Cunha (University of Minho; INESC TEC), João Leitão (NOVA LINCS & FCT, NOVA University of Lisbon), Pedro Ákos Costa (NOVA LINCS & DI-FCT-UNL)
16:30 30m Talk | Temporal Logics Meet Real-World Software Requirements: A Reality Check (Research Track). Roman Bögli (University of Bern), Atefeh Rohani (University of Bern), Thomas Studer (University of Bern), Christos Tsigkanos (University of Athens, Greece), Timo Kehrer (University of Bern)
17:00 30m Talk | Detecting Redundant Preconditions (Research Track). Nicola Thoben (University of Oldenburg), Heike Wehrheim (Carl von Ossietzky Universität Oldenburg / University of Oldenburg)
Mon 28 Apr (displayed time zone: Eastern Time, US & Canada)
07:00 - 19:00 | Ready Room Monday (ICSE Social, Networking and Special Rooms) at 209. See the Ready Room description under Sunday.
09:00 - 10:30 | Keynote (90 min): Adversarial Perturbations and Self-Defenses for Large Language Models on Coding Task (Research Track)
10:30 - 11:00 | Break: Monday Morning Break (ICSE Catering)
11:00 - 12:30 | Session 4: Generative AI and Fuzzy Logic (Research Track) at 203. Chair(s): Lina Marsso, École Polytechnique de Montréal
11:00 30m Talk | LLM-based Generation of Weakest Preconditions and Precise Array Invariants (Research Track)
11:30 30m Talk | VeCoGen: Automating Generation of Formally Verified C Code with Large Language Models (Research Track). Merlijn Sevenhuijsen (Scania CV AB & KTH Royal Institute of Technology), Khashayar Etemadi (KTH Royal Institute of Technology), Mattias Nyberg (Scania CV AB & KTH Royal Institute of Technology)
12:00 30m Talk | Embracing Uncertainty: A Fuzzy Theoretical Model for Goal Fulfillment Assessment (Research Track). Vincenzo Grassi (University of Roma "Tor Vergata"), Raffaela Mirandola (Karlsruhe Institute of Technology, KIT), Diego Perez-Palacin (Linnaeus University)
12:30 - 14:00 | Lunch: Monday Lunch (ICSE Catering)
14:00 - 15:30 | Session 5: Formal Specification and Verification (Research Track) at 203. Chair(s): Toshiaki Aoki, JAIST
14:00 30m Talk | Refining Alloy-Based Mutation Operators to Reflect Common Mistakes (Research Track). Ana Jovanovic (The University of Texas at Arlington), Mohammad Nurullah Patwary (The University of Texas at Arlington), Allison Sullivan (University of Texas at Arlington)
14:30 30m Talk | Verifying Multiple TLA+ Configurations with Blast (Research Track)
15:00 30m Talk | Typestates Specification and Verification in Frama-C (Research Track). Sébastien Patte (Université Paris-Saclay, CEA, List), Virgile Prevosto (Université Paris-Saclay, CEA, List)
15:30 - 16:00 | Break: Monday Afternoon Break (ICSE Catering)
Accepted Papers
Call for Papers
We accept papers in three categories:
- Full research papers describing original research work and results. We encourage authors to include validation of their contributions by means of a case study or experiments. We also welcome research papers focusing on tools and tool development.
- Case study papers discussing a significant application that suggests general lessons learned and motivates further research, or empirically validates theoretical results (such as a technique’s scalability).
- Research ideas papers describing new ideas in preliminary form, in a way that can stimulate interesting discussions at the conference, and suggest future work.
All papers submitted to the FormaliSE 2025 conference must be written in English, must be unpublished original work, and must not be under review or submitted elsewhere at the time of submission. Submissions must comply with FormaliSE's lightweight double-anonymous review process (see below).
Full research papers and case study papers can take up to 10 pages including all text, figures, tables and appendices, but excluding references. Research ideas papers can take up to 4 pages, plus up to 1 additional page solely for references.
To avoid authors wasting time squeezing their papers into the stated limit at the expense of presentation clarity, papers slightly exceeding the limit will still be considered, provided that the reviewers find the presentation to be of high quality.
All submissions must be in PDF format and must conform to the IEEE conference proceedings template, specified in the IEEE Conference Proceedings Formatting Guidelines (i.e., title in 24pt font and full text in 10pt type).
In LaTeX, use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options.
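As an illustration only, a minimal submission skeleton along the following lines should compile against the IEEE template; the title, author block, package choices, and bibliography file name are placeholders, not requirements of the call:

\documentclass[10pt,conference]{IEEEtran} % IEEE conference template, 10pt body text
% Note: do not pass the compsoc or compsocconf options.
\usepackage{cite}
\usepackage{graphicx}

\begin{document}

\title{Your Paper Title}                        % placeholder title
\author{\IEEEauthorblockN{Anonymous Author(s)}} % keep anonymized for the double-anonymous review

\maketitle

\begin{abstract}
One-paragraph abstract.
\end{abstract}

\section{Introduction}
Body text set in 10pt, as required by the IEEE formatting guidelines.

\bibliographystyle{IEEEtran}
\bibliography{references} % placeholder: assumes a references.bib file
\end{document}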
Note that the IEEE format is being used this year, whereas the ACM format was used last year; hence the appearance of the papers will differ from year to year.
To submit a paper to FormaliSE 2025, use the following HotCRP link.
Lightweight Double-Anonymous Review Process for Papers
As in recent editions, FormaliSE 2025 will use a lightweight double-anonymous process. Authors must omit their names and institutions from the title page, cite their own work in the third person, and omit acknowledgments that may reveal their identity or affiliation. The purpose is to reduce the chance of reviewer bias influenced by the authors’ identities. The double-anonymous process is, however, lightweight, meaning that it should not place a heavy burden on authors, nor make a paper’s presentation weaker or more difficult to review. Also, advertising the paper as part of your usual research activities (for example, on your personal web page, in a pre-print archive, by email, or in talks and discussions with colleagues) is permitted without penalty.
PAPER SELECTION
Each paper will be reviewed by at least three program committee members, who will judge its overall quality in terms of soundness, significance, novelty, verifiability, and presentation clarity.
FormaliSE 2025 will adopt a lightweight response process: if all the reviewers of a given paper agree that a clarification from the authors regarding a specific question could move the paper from “borderline” to “accept”, the chairs will relay the reviewers’ questions to the authors by email and then share the reply with the reviewers in HotCRP. The goal of lightweight responses is to reduce the chance of arbitrary decisions on borderline papers. Hence, they will only be used for a minority of submissions; most papers will not require an author response. Nevertheless, we ask the corresponding authors of all submissions to make sure that they are available to answer questions by email upon request.
PUBLICATION
All accepted papers are published as part of the ICSE 2025 Proceedings in the ACM and IEEE Digital Libraries.
At least one author of each accepted paper is required to register for the conference and present the paper — physically or, if circumstances do not allow it, virtually. Failure to register an author will result in the paper being removed from the proceedings.
Keynotes
Adversarial Perturbations and Self-Defenses for Large Language Models on Coding Task
Abstract: Large language models (LLMs) have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon previous neural network models of code that already demonstrated competitive results on tasks such as code summarization and identifying code vulnerabilities. However, these pre-LLM code models are known to be vulnerable to adversarial examples, i.e., small syntactic perturbations that do not change the program’s semantics (such as the inclusion of “dead code” through false conditions, the addition of inconsequential print statements, or changes in control flow), designed to “fool” the models. LLMs can also be vulnerable to the same adversarial perturbations. In this talk we discuss the effect of adversarial perturbations on coding tasks with LLMs and propose effective defenses against such adversaries. The coding tasks we consider include both classification (e.g., using LLMs for summarization or vulnerability detection) and code generation (e.g., using LLMs for code completion based on prompts plus code snippets).
Corina Pasareanu is an ACM Fellow and an IEEE ASE Fellow, working at NASA Ames. She is affiliated with KBR and Carnegie Mellon University's CyLab. Her research interests include model checking, symbolic execution, compositional verification, probabilistic software analysis, autonomy, and security. She is the recipient of several awards, including the ETAPS Test of Time Award (2021), the ASE Most Influential Paper Award (2018), the ESEC/FSE Test of Time Award (2018), the ISSTA Retrospective Impact Paper Award (2018), the ACM Impact Paper Award (2010), and the ICSE Most Influential Paper Award (2010).
Assuring AI in Autonomous Driving: Challenges and Emerging Approaches
Abstract: Artificial intelligence plays a critical role in autonomous driving, yet it introduces significant assurance challenges. Unlike traditional software, AI lacks explicit specifications and is applied to inherently uncertain, complex, and open problems such as road environment perception. Furthermore, modern AI systems are often opaque, making interpretability and verification difficult. These issues complicate the development of assurance methods that can ensure safety and reliability.
In this talk, I will review current approaches and industry standards for AI assurance in autonomous driving, including scenario-based testing and uncertainty estimation. I will discuss assurance methods for both hybrid systems — where AI functions as a component within a classical software stack — and end-to-end AI-driven systems. Finally, I will explore emerging directions for handling edge-case scenarios, including the use of foundation models and dual-processing architectures as potential solutions for enhancing the safety and robustness of end-to-end AI systems.
Krzysztof Czarnecki is a Professor of Electrical and Computer Engineering and a University Research Chair at the University of Waterloo, where he leads the Waterloo Intelligent Systems Engineering (WISE) Laboratory. He currently also serves as an Associate Director of the Waterloo Centre for Automotive Research (WatCAR). His research focuses on assuring the safety of AI systems and driving behavior. In 2018, he co-led the development of the first autonomous vehicle tested on public roads in Canada. He has made significant contributions to automotive AI and software safety standards, including SAE J3164 and ISO 8800. Before joining the University of Waterloo, he worked at DaimlerChrysler Research in Germany (1995-2002), where he advanced software development practices and technologies for enterprise, automotive, and aerospace sectors. His work has been recognized with numerous awards, including the Premier's Research Excellence Award (2004) and the British Computing Society’s Upper Canada Award for Outstanding Contributions to the IT Industry (2008). He has also received twelve Best Paper Awards, two ACM Distinguished Paper Awards, and five Most Influential Paper Awards.