Sat 3 May 2025 12:00 - 12:30 at 215 - Session 2: Paper Presentation Chair(s): Christian Medeiros Adriano

Recent work has shown that Large Language Models (LLMs) are not only a suitable tool for code generation but also capable of generating annotation-based code specifications. Scaling these methodologies may allow us to deduce provable correctness guarantees for large-scale software systems. In comparison to other LLM tasks, the application field of deductive verification has the notable advantage of providing a rigorous toolset to check LLM-generated solutions. This short paper provides early results on how this rigorous toolset is best used to reliably elicit correct specification annotations from an unreliable LLM oracle.
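For context, the annotation-based code specifications mentioned in the abstract typically take the form of method contracts embedded in the source code. Below is a minimal, hypothetical sketch of what such a contract might look like, assuming JML-style annotations as used by deductive Java verifiers such as KeY; the class, method, and contract are illustrative and not taken from the paper.

// Illustrative example: a JML contract that an LLM might propose and a
// deductive verifier could then check against the implementation.
public class IntMath {

    /*@ public normal_behavior
      @   requires true;
      @   ensures \result == (a > b ? a : b);
      @   ensures \result >= a && \result >= b;
      @*/
    public static /*@ pure @*/ int max(int a, int b) {
        return a > b ? a : b;
    }
}

In a workflow like the one the abstract describes, the verifier's accept/reject verdict on such a contract is what makes it possible to filter correct specifications out of unreliable LLM output.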

Sat 3 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Session 2: Paper Presentation (NSE) at 215
Chair(s): Christian Medeiros Adriano Hasso Plattner Institute, University of Potsdam
11:00
30m
Talk
A Graph-centric Neuro-symbolic Architecture Applied to Personalized Sepsis Treatments
NSE
Lucas Sakizloglou Brandenburg University of Technology Cottbus-Senftenberg, Taisiya Khakharova Brandenburg University of Technology Cottbus-Senftenberg, Leen Lambers Brandenburg University of Technology Cottbus-Senftenberg
11:30
30m
Talk
Neurosymbolic Architectural Reasoning: Towards Formal Analysis through Neural Software Architecture Inference
NSE
Steffen Herbold University of Passau, Christoph Knieke Technische Universität Clausthal, Andreas Rausch Clausthal University of Technology, Christian Schindler Institute for Enterprise Systems, University of Mannheim
12:00
30m
Talk
Next Steps in LLM-Supported Java Verification
NSE
Samuel Teuber Karlsruhe Institute of Technology, Bernhard Beckert Karlsruhe Institute of Technology