
Dr. Jane Cleland-Huang is the Frank M. Freimann Professor and Chair of the Department of Computer Science and Engineering at the University of Notre Dame. She holds a Ph.D. in Computer Science from the University of Illinois at Chicago and an honorary doctorate from the University of Gothenburg. Her research focuses on requirements engineering, human-machine interactions in autonomous systems, and sUAS safety assurance. She directs the Software and Requirements Engineering Research Lab, where she leads the DroneResponse project, including a spin-off aimed at deploying sUAS for emergency response and disaster relief. She has served as Program Chair for several major conferences, including the IEEE Requirements Engineering Conference (2010), ESEC/FSE (2014), ICSE (2020), CAIN (2022), and SPLC (2022), as Associate Editor for IEEE Transactions on Software Engineering, and on various editorial boards. Her research has been recognized with seven ACM SIGSOFT Distinguished Paper Awards, the Manfred Paul Award for Excellence in Software: Theory and Practice, and two Honorable Mention awards at CHI. Before entering academia, Dr. Cleland-Huang worked in refugee camps in South Asia, an experience that shaped her global perspective and her strong commitment to the fundamental rights of all people to live with dignity and to reach their full potential, regardless of race, gender, or religion; these principles continue to influence her research today.
Keynote: Smart Swarms, Smarter Boundaries: Rethinking Decision Assurance in Autonomous Systems
Autonomous Cyber-Physical Systems (CPS), including small Uncrewed Aerial Systems (sUAS), traditionally operate within well-defined safety envelopes, continuously monitoring runtime behavior and triggering interventions when predefined constraints are approached or violated. However, as CPS evolve to incorporate higher levels of cognition and reasoning, they are increasingly required to make complex runtime decisions that, while not necessarily posing direct safety risks, can significantly impact mission outcomes. Ensuring the integrity of these decisions in dynamic and uncertain environments is critical to advancing autonomy in a way that is both effective and trustworthy. This keynote explores the challenges of achieving mission-level decision integrity in autonomous systems and introduces the concept of decision guardrails: a framework for validating and constraining autonomous decisions in real time. Operating at a higher level of abstraction than traditional safety envelopes, decision guardrails assess assumptions, strategies, and plans both before and during execution to ensure alignment with mission objectives, human expectations, and ethical considerations. Examples drawn from DroneResponse, a multi-vehicle system of autonomous sUAS designed for dynamic human partnering, demonstrate how decision guardrails can enhance mission effectiveness while maintaining safety and accountability. By reinforcing decision integrity in complex and uncertain environments, these guardrails lay the foundation for the next generation of trustworthy autonomous drone swarms.

Bashar Nuseibeh is a Professor of Computing at The Open University, an Honorary Professor at University College London (UCL), and a Visiting Professor at the National Institute of Informatics (NII), Tokyo, Japan, and at University College Dublin (UCD), Ireland. Previously, he was a Professor of Software Engineering at the University of Limerick and Chief Scientist of Lero – The Irish Software Research Centre, and a Reader and then Visiting Professor of Computing at Imperial College London. Bashar’s current research interests lie at the intersection of requirements engineering, adaptive systems, and security & privacy. His research crosses a number of disciplinary boundaries, and he is a strong advocate of responsible software engineering informed by societal challenges and concerns.
Over the last 35 years, he has received over €120M in peer-reviewed research funding, including a European Research Council (ERC) Advanced Grant on adaptive security and privacy and an ERC Proof of Concept grant on asset-centric adaptive protection that commercialised his research. He has also received many awards for his research and services to the community, including an ICSE Most Influential Paper Award and best and distinguished paper awards at SEAMS. He has served as Editor-in-Chief of ACM TAAS and IEEE TSE, and as Programme Chair of ICSE and SEAMS. He also served as Chair of the ICSE Steering Committee. At ICSE this year, Bashar will receive the Harlan D. Mills Award, which “recognises researchers and practitioners who have demonstrated long-standing, sustained, and impactful contributions to software engineering practice and research through the development and application of sound theory”.
Bashar is a Fellow of the (UK) Royal Academy of Engineering (FREng) and of the Association for Computing Machinery (ACM), and a Member of Academia Europaea (MAE) and the Royal Irish Academy (MRIA).
For more details see https://nuseibeh.lero.ie/bashar-nuseibeh/
Keynote: Software Adaptation is Easy, Social Adaptation is Hard
“Software adaptation is easy, social adaptation is hard” – that’s not really true, is it? There are many open challenges for the effective engineering of adaptive software, and much scope for research in the area. It is not easy. But I’d like to argue in this talk that these challenges are dwarfed by the challenges of engineering adaptive systems: systems that interact with the physical world and with the world of people. Let’s call them socio-technical systems. On the one hand, the contextual environment in which adaptive software is developed provides the requirements and constraints within which the software must operate. On the other hand, the software can also affect that very context, requiring contextual adaptation to make best use of the software. Where do software engineers’ responsibilities lie? Should they bound and focus their effort only on the development of software and its behaviour, or should their responsibility, and their tools, extend into the social and physical world? And, more importantly, are software engineers equipped to handle such an expanded role and responsibility? If socio-technical systems are to be truly adaptive, should the software engineer consider, specify, model, and implement physical and social adaptation? As a society, we seem comfortable adapting our physical world to accommodate technology, but we are much less comfortable dabbling with ‘social engineering’. But are we being naive? Social engineering is being attempted all around us; should we just bite the bullet and incorporate it into our software engineering repertoire? This is controversial and hard. Discuss!