MODELS 2026
Keynotes
David Lo
Singapore Management University, Singapore
Wednesday October 7th
From Synthesis to Safeguards: The Symbiosis of Models and LLMs
The rapid ascent of Large Language Models (LLMs) has introduced what many perceive as a pivotal crossroads for software engineering: a choice between the unprecedented velocity of generative AI and the rigorous reliability of formal modeling. In this talk, I argue that the future of the field lies in a symbiotic "neuro-symbolic" relationship where these two paradigms are no longer viewed as alternatives, but as essential partners. We first examine an emerging shift in the modeling lifecycle: the transition from manual, high-overhead specification to AI-assisted synthesis. By using LLMs as catalysts to automatically generate high-fidelity artifacts, we can mitigate the modeling overhead that has historically been a challenge for static analysis and formal verification. Conversely, we explore the indispensable role of models as the "safety brain" for generative AI. Formal reasoning and program invariants can be leveraged to create runtime guardrails, transforming stochastic AI outputs into more verifiable, reliable systems. Through several case studies, this talk aims to provide a call-to-arms to realize a future where the scale and intuition of LLMs work in tandem with the precision and authority of models to define the next generation of trustworthy software engineering.
Biography
David Lo is the VP of Research Designate, OUB Chair Professor of Computer Science, and Founding Director of the Center for Research in Intelligent Software Engineering (RISE) at Singapore Management University. Championing the area of AI for Software Engineering (AI4SE) since the mid-2000s, he has demonstrated how AI — encompassing data mining, machine learning, information retrieval, natural language processing, and search-based algorithms — can transform software engineering data into actionable insights and automation. Through empirical studies, he has also identified practitioners' pain points, characterized the limitations of AI4SE solutions, and explored practitioners' acceptance thresholds for AI-powered tools. His contributions have led to over 20 awards, including four Test-of-Time awards and seventeen ACM SIGSOFT/IEEE TCSE Distinguished Paper awards, and his work has garnered over 48,000 citations. An ACM Fellow, IEEE Fellow, ASE Fellow, and National Research Foundation Investigator (Senior Fellow), Lo has also served as a PC Co-Chair for ASE'20, FSE'24, and ICSE'25.
Steven Kelly
MetaCase, Finland
Thursday October 8th
What Modelling will be Relevant in 2056?
With GenAI able to do so much in building software, our field is facing an upheaval and questions on many levels. By looking at earlier revolutions in software development, we can identify potential answers: things that have weathered these previous storms and remained unchanged. Key invariants we will examine include human cognition, the power of language, the astonishing growth of computing power, the tyranny of scale, and the similarities and differences of human and machine. By lifting our eyes from extraneous details to the big picture, and delving deeper into the unchanging foundations of our endeavours, we can plot a well-founded course towards 2056.
Biography
Dr. Steven Kelly is the CTO of MetaCase and co-founder of the DSM Forum. He has over thirty years of experience building tools for Domain-Specific Modeling and consulting for some of the world's largest companies, improving their productivity by a factor of 5–10.
As architect and lead developer of MetaEdit+, MetaCase’s domain-specific modeling tool, he has been privileged to receive awards from the Association of Information Systems, SD Times, Byte, and the Finnish President. He is author of a book on DSM and over 100 articles, most recently in journals such as SoSyM and IEEE Software, and a regular speaker at events such as MODELS, SPLASH and STAF.
He has an M.A. in Mathematics and Computer Science from the University of Cambridge, and a Ph.D. from the University of Jyväskylä. His computer education began with machine code, Assembler and BASIC, and came to rest in Smalltalk.
Sira Vegas
Technical University of Madrid, Spain
Friday October 9th
The Fragility of Software Engineering Experiments: What We Fail to See
Controlled experiments are a cornerstone of software engineering research, yet the reliability of their conclusions depends critically on how they are designed, executed, and analyzed. In this keynote, I will examine common pitfalls that undermine the credibility of software engineering experiments, focusing on key dimensions of validity, including internal, construct, and statistical conclusion validity.
Drawing on examples, I will show how seemingly reasonable experimental choices—ranging from variable operationalization to subject selection, experimental design and data analysis—can lead to misleading or overconfident conclusions. These challenges arise across different kinds of experiments, from those involving human participants to those based on complex artifacts and toolchains, where small design decisions can have large, often unnoticed effects on results.
Building on these observations, I will discuss practical strategies for designing and evaluating experiments that produce more trustworthy and informative results. The goal is not only to avoid common pitfalls, but to strengthen the scientific contribution of software engineering experiments and support more robust and cumulative progress in the field.
Biography
Sira Vegas is Full Professor of Software Engineering at the Universidad Politécnica de Madrid, Spain. Her research focuses on experimental software engineering, AI for software engineering (AI4SE), and software testing. She has published over 100 articles in international journals and conferences, participated in national and international research projects, and collaborated with leading researchers worldwide. She serves regularly on program committees and as a reviewer for top venues such as IEEE TSE, ACM TOSEM, EMSE, ICSE, ESEM, MSR and ASE. Sira has co-chaired several international conferences (ESEM, EASE, PROFES, ICSE-DS, ICPC-RENE, SANER-JF) and currently chairs the ISERN Steering Committee. She holds a Ph.D. from UPM and has been a visiting researcher at the University of Maryland, Fraunhofer IESE, University of Oulu, and Tongji University.