SEAMS 2024
Mon 15 - Tue 16 April 2024 Lisbon, Portugal
co-located with ICSE 2024

Alessandra Russo

Imperial College London

Alessandra Russo is a Professor of Applied Computational Logic at the Department of Computing, Imperial College London, Deputy Director of the UKRI Centre for Doctoral Training in “Safe and Trusted AI”, and promoter of the Imperial-X interdisciplinary research initiative “Intelligible AI” on explainable, safe, and trustworthy AI. She leads the “Structured and Probabilistic Intelligent Knowledge Engineering (SPIKE)” research group at the Department of Computing. She has pioneered several state-of-the-art symbolic machine learning systems, including the recent LAS (Learning from Answer Sets) system for learning interpretable knowledge from labelled data. More recently, she has explored novel methodologies for neuro-symbolic learning that integrate machine learning and probabilistic inference with symbolic learning to support generalisation and transfer learning from multimodal unstructured data. She has published over 200 articles in flagship conferences and high-impact journals in Artificial Intelligence and Software Engineering, and has led various projects funded by the EPSRC, the EU, and industry.

Keynote: Advances on Symbolic Machine Learning and Recent Applications to Software Engineering

Learning interpretable models from data is one of the main challenges of AI. Symbolic Machine Learning offers algorithms and systems for learning models that explain data in the context of a given domain knowledge. In contrast to statistical learning, models learned by Symbolic Machine Learning are interpretable: they can be translated into natural language and understood by humans. In this talk, I will give an overview of our state-of-the-art symbolic machine learning system, ILASP, which is capable of learning different classes of models (e.g., non-monotonic, non-deterministic, and preference-based) for real-world problems in a manner that is data-efficient, scalable, and robust to noise. I will show how such a system can be integrated with statistical and deep learning to provide neuro-symbolic AI solutions for learning complex interpretable knowledge from unstructured data. I will then illustrate how these advances can be applied to areas such as agent learning, run-time adaptation of security for unmanned aerial vehicles, and online learning of policies for explainable security.
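To give a flavour of the Learning from Answer Sets setting the abstract refers to, the toy Python sketch below mimics its core idea with a naive generate-and-test loop: from a small hypothesis space of candidate rules, select one that, together with the background knowledge, covers every positive example and no negative example. The domain, rule names, and helper functions are illustrative assumptions, not ILASP's actual input language or algorithm.

```python
# Toy generate-and-test sketch of the Learning-from-Answer-Sets idea.
# Background knowledge: known facts about a tiny domain.
background = {("bird", "tweety"), ("bird", "polly"), ("penguin", "polly")}

def holds(fact, kb):
    return fact in kb

# Hypothesis space: candidate rules, each a function deciding
# whether flies(X) should hold given the background knowledge.
candidate_rules = {
    "flies(X) :- bird(X).":
        lambda x: holds(("bird", x), background),
    "flies(X) :- bird(X), not penguin(X).":  # non-monotonic rule
        lambda x: holds(("bird", x), background)
                  and not holds(("penguin", x), background),
}

positives = ["tweety"]   # flies(tweety) should be entailed
negatives = ["polly"]    # flies(polly) must not be entailed

def consistent(rule):
    # A hypothesis is acceptable iff it covers all positive
    # examples and excludes all negative ones.
    return (all(rule(x) for x in positives)
            and not any(rule(x) for x in negatives))

learned = [name for name, rule in candidate_rules.items() if consistent(rule)]
print(learned)  # ['flies(X) :- bird(X), not penguin(X).']
```

The surviving hypothesis is the non-monotonic rule, which hints at why this style of learning can capture defaults and exceptions that purely statistical models express only implicitly; real systems such as ILASP search vastly larger hypothesis spaces and handle noise, which this toy omits.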

Sun Jun

Singapore Management University

Sun Jun is currently a professor at Singapore Management University (SMU). He received his Bachelor’s and PhD degrees in computing science from the National University of Singapore (NUS) in 2002 and 2006, respectively, and has been a faculty member at SMU since 2010. He was a visiting scholar at MIT from 2011 to 2012. Jun’s research interests include AI safety, formal methods, program analysis, and cyber-security. He is a co-founder of the PAT model checker. He has published numerous journal articles and peer-reviewed conference papers, many of them at top-tier venues, and serves as a technical consultant for multiple companies.

Keynote: Towards Always Law-Abiding Self-Driving

How should an autonomous vehicle behave on the road, besides causing no accidents and reaching its destination? Fortunately, rich sets of criteria for how a vehicle should undertake a journey already exist: the various national traffic laws. In addition to avoiding collisions, an autonomous vehicle should satisfy the traffic laws of the country it operates in. Until we design new traffic laws specifically for autonomous vehicles, existing traffic laws remain the gold standard for ensuring road safety. The question is then: how do we systematically make sure that an autonomous vehicle almost always abides by the traffic laws? In this talk, I will introduce our recent efforts on formalizing national traffic laws and using them to adaptively and automatically enforce law-abiding self-driving.
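As a minimal sketch of the general idea, formalized traffic laws can be viewed as predicates over the vehicle's state that a runtime monitor checks continuously, reporting (or preempting) violations so the planner can adapt. The specific laws, state fields, and thresholds below are illustrative assumptions, not the formalization used in the talk.

```python
# Sketch: traffic laws as predicates over vehicle state, checked
# by a runtime monitor. All laws and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    speed_limit_kmh: float
    light: str            # "red", "amber", or "green"
    distance_to_stop_m: float

# Each law is a predicate: True means the state is compliant.
def within_speed_limit(s: VehicleState) -> bool:
    return s.speed_kmh <= s.speed_limit_kmh

def stops_at_red(s: VehicleState) -> bool:
    # Simplified: at a red light, the vehicle must be (nearly)
    # stopped once it is close to the stop line.
    return s.light != "red" or s.distance_to_stop_m > 5 or s.speed_kmh < 1

LAWS = [within_speed_limit, stops_at_red]

def monitor(state: VehicleState) -> list[str]:
    """Return the names of all laws the current state violates,
    so the planner can adapt (e.g., brake) before a violation."""
    return [law.__name__ for law in LAWS if not law(state)]

print(monitor(VehicleState(62.0, 50.0, "red", 3.0)))
# ['within_speed_limit', 'stops_at_red']
```

In practice, laws of this kind are stated over trajectories rather than single snapshots, which is why the work described in the talk formalizes them in a dedicated specification language and enforces them adaptively rather than with fixed thresholds.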