ICTSS 2024
Wed 30 October - Fri 1 November 2024 London, United Kingdom

ICTSS Career Award


Awardee: Professor Ana Cavalli (Montimage and University of Paris VII)


Title:

Resilience Techniques for Cyber-Physical Systems and Critical Infrastructures

Summary:

In recent years, significant research activity has focused on the application of monitoring and testing techniques to security. Resilience has also become a crucial issue in guaranteeing the security and robustness of systems. In this talk, we will present our approach, based on resilience and testing techniques, to ensuring the cybersecurity of systems.

Speaker’s bio:

Ana Rosa Cavalli obtained her Doctorat d’Etat in Mathematical Sciences and Informatics from the University of Paris VII in 1984. She is now an emeritus professor and works as research director at the SME Montimage. Her research interests include specification and verification, testing methodologies for conformance and interoperability testing, active testing and monitoring techniques, and the validation of security properties and their application to services and protocols. She has been the leader of the European Marie Curie network TAROT (Training and Research on Testing) and has participated, or is currently participating, in several national and international projects: INTER-TRUST, CLARUS, DIAMONDS, NOTTS, HIPNQSIS, MEASURE, Marie Curie TRUST, SANCUS, VeriDevOps, AI4CYBER and DYNABIC.


Keynote Speakers


Speaker: Professor Cristian Cadar (Imperial College London)


Title:

Fuzzing Research and Practice: Advances and Open Challenges

Summary:

In this talk, I will reflect on my experiences designing and applying different forms of fuzzing (blackbox, greybox and whitebox) to test complex software systems. I will discuss the main strengths and weaknesses of the different techniques, and their challenges in terms of detecting corner-case bugs, finding functional correctness errors, keeping up with an accelerated pace of development, and integrating them into the broader software development process.
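To give a flavour of the simplest of the techniques mentioned above, the sketch below shows a minimal blackbox mutation fuzzer. It is purely illustrative and not taken from the talk: the target function parse_record, the seed input and the mutation budget are all invented for the example, and a greybox or whitebox fuzzer would additionally use coverage feedback or symbolic reasoning, which are omitted here.

# A minimal blackbox mutation-fuzzing sketch (illustrative only).
# `parse_record` is a hypothetical input-processing routine standing in
# for a real system under test.
import random

def parse_record(data: bytes) -> None:
    """Hypothetical system under test: raises on a specific malformed input."""
    if len(data) > 4 and data[:4] == b"RECD" and data[4] == 0xFF:
        raise ValueError("unhandled corner case")  # stands in for a bug

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Run the target on mutated inputs and collect those that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

if __name__ == "__main__":
    found = fuzz(b"RECD\x00payload")
    print(f"{len(found)} crashing inputs found")

Running the sketch simply reports how many mutated inputs triggered an exception; production fuzzers add input scheduling, coverage instrumentation and crash deduplication on top of this basic loop.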

Speaker’s bio:

Cristian Cadar is a Professor in the Department of Computing at Imperial College London, where he leads the Software Reliability Group (http://srg.doc.ic.ac.uk/), working on automatic techniques for increasing the reliability and security of software systems. Cristian’s research has been recognised by several prestigious awards, including the IEEE TCSE New Directions Award, the BCS Roger Needham Award, the HVC Award, the EuroSys Jochen Liedtke Award, and two test-of-time awards. Many of the research techniques he co-authored have been used in both academia and industry. In particular, he is a maintainer and developer of the KLEE symbolic execution system, a popular tool with a large user base. Cristian has a PhD in Computer Science from Stanford University, and undergraduate and Master’s degrees from the Massachusetts Institute of Technology.

Speaker: Professor Robert Hierons (The University of Sheffield)


Title:

Systematic testing for robotic systems

Summary:

Robotic systems form the basis for advances in areas such as manufacturing, healthcare, and transport. Many of the areas in which robotic systems are being used are safety-critical, so there is a need for software development processes that lead to robotic systems that are safe, reliable and trusted. Testing will inevitably be an important component of such processes.

This talk will describe recent work on automated testing of robotic systems. The work is model-based: it takes as input a state-based model that describes the required behaviour of the system under test. Models are written in either RoboChart, a state-based language for robotics, or RoboSim, a simulation language for robotics. These languages have been given a formal semantics, making it possible to reason about models in a sound manner. The talk will describe how the development of robotic software can be formalised using such languages and how this makes it possible to automate the generation of sound test cases. Such test cases can be used for testing within a simulation and possibly also for testing the deployed system. Testing is systematic, since test cases target potential faults.
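As a purely illustrative complement, and not the RoboChart/RoboSim toolchain described in the talk, the sketch below derives a transition-coverage test suite from a toy state-based model of a hypothetical robot controller; the states, inputs and transitions are invented for the example.

# A toy sketch of model-based test generation from a state machine.
# The model is a hypothetical robot controller; each generated test is an
# input sequence that drives the model through one target transition.
from collections import deque

# Transition relation: (state, input) -> next state
MODEL = {
    ("idle", "start"): "moving",
    ("moving", "obstacle"): "stopped",
    ("moving", "arrive"): "idle",
    ("stopped", "clear"): "moving",
}

def test_suite(initial: str = "idle") -> list[list[str]]:
    """Return one input sequence per transition (transition coverage)."""
    suite = []
    for (src, inp), _dst in MODEL.items():
        # Breadth-first search for the shortest input sequence reaching `src`.
        queue = deque([(initial, [])])
        seen = {initial}
        prefix = None
        while queue:
            state, path = queue.popleft()
            if state == src:
                prefix = path
                break
            for (s, i), nxt in MODEL.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [i]))
        if prefix is not None:
            suite.append(prefix + [inp])  # exercise the target transition
    return suite

if __name__ == "__main__":
    for seq in test_suite():
        print(" -> ".join(seq))

The sound, semantics-based test generation described in the talk goes well beyond this sketch, but the underlying idea is the same: test cases are derived from a state-based specification so that each one targets specific behaviour, and hence potential faults, of the system under test.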

Speaker’s bio:

Rob Hierons is a Professor at the University of Sheffield. Much of his research concerns the automated generation of efficient, systematic test suites on the basis of program code, models or specifications. He has a particular interest in testing based on state-based models, typically expressed either as a form of state machine or using a process algebra such as CSP. He has a long-standing interest in mutation testing and search-based software testing. He was joint Editor of the journal Software Testing, Verification and Reliability (2011-2022) and has co-chaired several international conferences.