AST 2023
Mon 15 - Tue 16 May 2023 Melbourne, Australia
co-located with ICSE 2023

Keynotes


Future Software for Life in Trusted Futures


Abstract: How will people, other species, software and hardware live together in as yet unknown futures? How can we work towards trusted and safe futures where human values and the environment are supported by emerging technologies? Research demonstrates that human values and everyday life priorities, ethics, routines and activities will shape our possible futures. I will draw on ethnographic research to outline how people anticipate and imagine everyday life futures with emerging technologies in their homes and neighbourhoods, and how technology workers envisage futures in their professional lives. If, as social science research shows, technologies cannot solve human and societal problems, what roles should they play in future life? What are the implications for future software? What values should underpin its design? Where should it be developed? By and in collaboration with whom? What role can software play in generating the circumstances for trusted futures?

Prof. Sarah Pink (PhD, FASSA) is a design and futures anthropologist and documentary filmmaker. She is Professor and Director of the Emerging Technologies Research Lab at Monash University. Her research is interdisciplinary, bringing together scholarship and industry engagement to propose how emerging technologies will participate in possible human futures. Her recent publications include the documentaries Smart Homes for Seniors (2021) and Digital Energy Futures (2022), and the books Emerging Technologies / Life at the Edge of the Future (2023), Design Ethnography (2022), Everyday Automation (2022) and Energy Futures (2022).



The Road Toward Dependable AI-Based Systems

Abstract: With the advent of deep learning, AI components have achieved unprecedented performance on complex, human-competitive tasks, such as image, video, text and audio processing. Hence, they are increasingly integrated into sophisticated software systems, some of which (e.g., autonomous vehicles) are required to deliver certified dependability warranties. In this talk, I will consider the unique features of AI-based systems and of the faults possibly affecting them, in order to revise the testing fundamentals and redefine the overall goal of testing, taking a statistical view on the dependability warranties that can actually be delivered. Then, I will consider the key elements of a revised testing process for AI-based systems, including the test oracle and the test input generation problems. I will also introduce the notion of runtime supervision, to deal with unexpected error conditions that may occur in the field. Finally, I will identify the future steps that are essential to close the loop from testing to operation, proposing an empirical framework that reconnects the output of testing to its original goals.

Prof. Paolo Tonella is Full Professor at the Faculty of Informatics and at the Software Institute of Università della Svizzera italiana (USI) in Lugano, Switzerland. He is Honorary Professor at University College London, UK. Paolo Tonella holds an ERC Advanced grant as Principal Investigator of the project PRECRIME. He has written over 150 peer-reviewed conference papers and over 50 journal papers. At ICSE 2011 he received the Most Influential Paper (MIP) award for his ICSE 2001 paper "Analysis and Testing of Web Applications". His H-index (according to Google Scholar) is 60. He serves or has served on the editorial boards of TOSEM, TSE and EMSE. He will be Program Co-Chair of ESEC/FSE 2023. His current research interests are in software testing, in particular approaches to ensure the dependability of machine-learning-based systems, automated testing of cyber-physical systems, and test oracle inference and improvement.



Software Engineering as the Linchpin of Responsible AI

Abstract: From humanity’s existential risks to safety risks in critical systems to ethical risks, responsible AI, cast as the saviour, has become a massive research challenge with significant real-world consequences. However, achieving responsible AI remains elusive despite the plethora of high-level ethical principles, risk frameworks and progress in algorithmic assurance. In the meantime, software engineering (SE) is being upended by AI, grappling with building system-level quality and alignment from inscrutable ML models and code generated from natural language prompts. This upending poses new challenges and opportunities for engineering AI systems responsibly. This talk will share our experiences in helping industry achieve responsible AI systems by inventing new SE approaches. It will dive into industry challenges (such as risk silos and principle-algorithm gaps) and research challenges (such as lack of requirements, emergent properties and inscrutable systems) and make the point that SE is the linchpin of responsible AI. But SE also requires some fundamental rethinking - shifting from building functions into AI systems to discovering and managing emergent functions of AI systems. Only by doing so can SE take on critical new roles, from understanding human intelligence to building a thriving human-AI symbiosis.

Dr. Liming Zhu is the Research Director of the Software and Computational Systems division at CSIRO's Data61. Data61 is part of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and is Australia's data/AI innovation arm. The division innovates in the areas of AI/ML/big data infrastructure, computational and simulation science platforms, trustworthy and responsible AI, distributed systems, blockchains, software ecosystems, software engineering/architecture, DevOps, quantum software, privacy and cybersecurity. He also holds a conjoint full professor position at the University of New South Wales (UNSW). He is a Graduate of the Australian Institute of Company Directors. He previously held several technology lead positions in the software industry before obtaining a PhD in software engineering from UNSW. He is currently the chairperson of Standards Australia's blockchain and distributed ledger committee and serves on AI trustworthiness related committees. He has supervised more than 20 PhD students and taught software architecture courses at UNSW and the University of Sydney. He has published more than 200 academic papers on software architecture, secure systems, data/ML infrastructure, blockchain, governance and responsible AI.