ESEM 2021
Mon 11 - Fri 15 October 2021

How Empirical Research Supports Tool Development: A Retrospective Analysis and New Horizons

When: Tuesday, Oct 12

Shared with SSBSE

Massimiliano Di Penta

University of Sannio, Italy


Massimiliano Di Penta is a full professor at the University of Sannio, Italy. His research interests include software maintenance and evolution, mining software repositories, empirical software engineering, search-based software engineering, and software testing. He is an author of over 300 papers that have appeared in international journals, conferences, and workshops. He has received several awards for research and service, including four ACM SIGSOFT Distinguished Paper Awards. He serves and has served on the organizing and program committees of more than 100 conferences, including ICSE, FSE, ASE, and ICSME. Among other roles, he has been program co-chair of ASE 2017 and ESEC/FSE 2021, and will be program co-chair of ICSE 2023. He is co-editor-in-chief of the Journal of Software: Evolution and Process, published by Wiley, an editorial board member of ACM Transactions on Software Engineering and Methodology and of the Empirical Software Engineering journal, published by Springer, and has served on the editorial board of IEEE Transactions on Software Engineering.

Abstract

Empirical research provides two-fold support to the development of approaches and tools aimed at supporting software engineers. On the one hand, empirical studies help to understand a phenomenon or a context of interest. On the other hand, studies compare approaches and evaluate how software engineers benefit from them. Over the past decades, there has been a tangible evolution in how empirical evaluation is conducted in software engineering. This is due to multiple reasons. On the one hand, the research community has matured considerably, thanks also to guidelines developed by several researchers. On the other hand, the wide availability of data and artifacts, mainly from open source, has made it possible to conduct larger evaluations and, in some cases, to reach study participants. In this keynote, I will first give an overview of how empirical research has been used over the past decades to evaluate tools, and how this has been changing over the years. I will also emphasize the importance of combining quantitative and qualitative evaluations, and how depth sometimes turns out to be more useful than breadth alone. I will further emphasize that research is not a straightforward path, and that negative results are often an essential component of future advances. Last, but not least, I will discuss how the role of empirical evaluation is changing with the pervasiveness of artificial intelligence methods in software engineering research.


Measurement Challenges for Cyber-Cyber Digital Twins: Experiences from the Deployment of Facebook’s WW Simulation System

When: Friday, Oct 15

Maria Lomeli and Mark Harman

Facebook, UK


Maria Lomeli is currently a Software Engineer in the London Probability team at Facebook, working on the WW cyber-cyber digital twin. Previously, she was a senior research scientist at Babylon Health, UK, and a postdoctoral researcher in Machine Learning at the University of Cambridge. She was awarded a PhD in Statistical Machine Learning from the Gatsby Unit, University College London. Her scientific work combines aspects of software engineering and machine learning at scale, and has been published in leading venues in both machine learning, such as ICML and NeurIPS, and software engineering, such as ICSE. She has given over 30 talks internationally on her scientific work.


Mark Harman is a full-time Research Scientist in the London Probability team at Facebook, working on the WW cyber-cyber digital twin. Mark also holds a part-time professorship at UCL and was previously the manager of Facebook's Sapienz team, which grew out of Majicke, a start-up co-founded by Mark and acquired by Facebook in 2017. The Sapienz technology has been fully deployed as part of Facebook's overall CI system since 2017, and the Facebook Sapienz team continues to develop and extend it. Sapienz has found and helped to fix thousands of bugs before they hit production, on systems of tens of millions of lines of code used by over 2.8 billion people worldwide every day. Prior to working at Facebook, Mark was head of Software Engineering at UCL and director of its CREST centre. In his more purely scientific work, Mark co-founded the field of Search Based Software Engineering (SBSE) in 2001, now the subject of active research in over 40 countries worldwide. He received the IEEE Harlan Mills Award and the ACM Outstanding Research Award in 2019 for his work, and was awarded a fellowship of the Royal Academy of Engineering in 2020.

Abstract

This talk concerns the measurement of software systems built as cyber-cyber digital twins. A cyber-cyber digital twin is a deployed software model that executes in tandem with the system it simulates, contributing to, and drawing from, that system’s behaviour. This talk outlines Facebook’s cyber-cyber digital twin, WW, a twin of Facebook’s WWW platform, built using Web-Enabled Simulation. The talk will focus on research challenges and opportunities in the area of measurement. Measurement challenges lie at the heart of modern simulation, directly impacting how we use simulation outcomes to make automated online and semi-automated offline decisions, and how we verify and validate those outcomes. As modern simulation systems increasingly resemble cyber-cyber digital twins, thereby moving from manual to automated decision making, these measurement challenges acquire ever greater significance.
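To make the "executes in tandem" idea concrete, the following is a minimal, hypothetical sketch (in Python) of a cyber-cyber digital twin feedback loop. It is not Facebook's WW; all names (LiveSystem, DigitalTwin, run_in_tandem, the spam-rate metric, the throttle intervention) are illustrative assumptions. It shows a twin drawing observations from a live system, simulating candidate interventions, and contributing a measured decision back, which is exactly where the measurement questions in the abstract arise.

# Minimal sketch (not Facebook's WW): a cyber-cyber digital twin running
# in tandem with a "live" system. The twin draws observations from the
# live system, simulates candidate interventions, and contributes a
# measured decision back. All names here are hypothetical.
import random
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class LiveSystem:
    """Stand-in for the real platform the twin mirrors."""
    spam_rate: float = 0.10
    throttle: float = 0.0  # intervention applied by the twin's decision

    def observe(self) -> float:
        # Noisy measurement of the behaviour we care about.
        effective = self.spam_rate * (1.0 - self.throttle)
        return max(0.0, random.gauss(effective, 0.01))

    def apply(self, throttle: float) -> None:
        self.throttle = throttle


@dataclass
class DigitalTwin:
    """Simulates the live system to evaluate candidate interventions."""
    history: list = field(default_factory=list)

    def sync(self, observation: float) -> None:
        # Draw from the live system's behaviour.
        self.history.append(observation)

    def simulate(self, throttle: float, runs: int = 200) -> float:
        # Replay recent behaviour under a candidate intervention and return
        # the measured expected outcome: the quantity whose validity the
        # talk's measurement challenges concern.
        baseline = mean(self.history[-50:])
        return mean(
            max(0.0, random.gauss(baseline * (1.0 - throttle), 0.01))
            for _ in range(runs)
        )


def run_in_tandem(steps: int = 20) -> None:
    live, twin = LiveSystem(), DigitalTwin()
    for _ in range(steps):
        twin.sync(live.observe())                    # draw from the system
        best = min((0.0, 0.25, 0.5),                 # candidate interventions
                   key=lambda t: twin.simulate(t) + 0.05 * t)  # cost-aware choice
        live.apply(best)                             # contribute back to it
    print(f"final throttle chosen by twin: {live.throttle:.2f}")


if __name__ == "__main__":
    run_in_tandem()

In this toy loop the decision is fully automated; the quality of the choice depends entirely on how well the simulated measurements track the live system, which is the verification and validation question the talk addresses.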

This talk reports the results of joint work by John Ahlgren, Maria Eugenia Berezin, Kinga Bojarczuk, Sophia Drossopoulou, Elena Dulskyte, Inna Dvortsova, Johann George, Natalija Gucevska, Mark Harman, Ralf Laemmel, Maria Lomeli, Simon Lucas, Steve Omohundro, Erik Meijer, Rubmary Rojas, Silvia Sapora, Justin Spahr-Summers, and Jie Zhang.