EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy

Mark Harman – Meta & University College London

Date and Time: June 18th, 2024 – TBD
Title: The Role of Software Measurement in Assured LLM-Based Software Engineering
Abstract: Assured Large Language Model Software Engineering (Assured LLMSE) addresses the twin challenges:

  1. Ensuring LLM-generated code does not regress the properties of the original code.
  2. Quantifying, in a verifiable and measurable way, the improvement achieved by the improved code over the original.

In so doing, the Assured LLMSE approach tackles the problem of LLMs’ tendency to hallucinate, while also providing confidence that generated code improves an existing code base. Software testing and measurement play critical roles in this improvement process: testing guards against regression, while measurement provides the quantifiable assurance of improvement. Assured LLMSE takes its inspiration from previous work on genetic improvement, in which software measurement also plays a central role. In this keynote we outline the Assured LLMSE approach, highlighting the role of software measurement in the provision of quantifiable, verifiable assurances for code that originates from LLM-based inference.

This is joint work with Nadia Alshahwan, Andrea Aquino, Jubin Chheda, Anastasia Finegenova, Inna Harper, Mitya Lyubarskiy, Neil Maiden, Alexander Mols, Shubho Sengupta, Rotem Tal, Alexandru Marginean, and Eddy Wang.

Bio: Mark Harman is a full-time Research Scientist at Meta Platforms in the Instagram Product Performance team, working on software engineering automation. He was previously in the Simulation-Based Testing (SBT) team at Meta, which he co-founded. The SBT team developed and deployed both the Sapienz and WW platforms for client- and server-side testing. Sapienz grew out of Majicke (a start-up Mark co-founded) that was acquired by Facebook (now Meta Platforms) in 2017. Prior to working at Meta Platforms, Mark was head of Software Engineering at UCL and director of its CREST centre, where he remains a part-time professor. In his more purely scientific work, he co-founded the field of Search-Based Software Engineering (SBSE) in 2001. He received the IEEE Harlan Mills Award and the ACM Outstanding Research Award in 2019 for his work, and was awarded a fellowship of the Royal Academy of Engineering in 2020.


Massimiliano Di Penta – University of Sannio, Italy

Date and Time: June 19th, 2024 – TBD
Title: Why Large Language Models will (not) Kill Software Engineering Research
Abstract: Over the last decade, we have witnessed a flourishing of activity in applying deep learning techniques to software engineering problems that were poorly addressed in the past, or not addressed at all. In this context, researchers put effort into creating specialized representations and models, thereby making a tangible, conceptual contribution beyond the simple application. With the advent of Large Language Models, such contributions were surpassed, which was possible because big tech companies had data and infrastructure at their disposal. As such models are quite good at solving many software engineering problems, where would research in software engineering, and specifically in recommender systems, go? Will artificial intelligence research kill it? Fortunately, we should not forget that software engineering is about people, and this is where I believe there will be a lot of room for novel research. Software engineering researchers have the knowledge to understand how LLMs fit (or do not fit) in a development context, by carefully weighing, for example, human, ethical, and legal factors. Also, software engineering researchers have a strong empirical background for evaluating the effectiveness of such models where state-of-the-art measurements might not suffice.


Nicole Novielli – University of Bari “Aldo Moro”, Italy

Date and Time: June 20th, 2024 – TBD
Title: Surfing the AI Wave in Software Engineering: Opportunities and Challenges
Abstract: The diffusion of generative AI, specifically Large Language Models (LLMs), is profoundly affecting Software Engineering. Thanks to their unprecedented potential for disruptive change, which mainly resides in their ability to reduce the need for large-scale training data for new tasks and to lower the technical entry barrier, these technologies offer an enormous opportunity to accelerate and enhance software engineering research and practice. Nevertheless, concerns also emerge related to the risks associated with poor results and indiscriminate use. In this evolving landscape, it becomes crucial to assess the opportunities and challenges posed by these emerging technologies. In this talk, I will reflect on the role of research in the era of AI in the hope of triggering a discussion on the paradigm shift for empirical software engineering.

Bio: Nicole Novielli is an Associate Professor of Computer Science at the University of Bari, Italy. Her research interests lie at the intersection of software engineering and affective computing with a specific focus on emotion mining from software repositories, natural language processing of developers’ communication traces, and biometric recognition of developers’ emotions. She was the Program co-Chair of the 19th Int. Conf. on Mining Software Repositories (MSR 2022) and the 30th IEEE Int. Conf. on Software Analysis, Evolution and Reengineering (SANER 2023). She is a member of the editorial board of the Empirical Software Engineering journal (Springer) and the Journal of Systems and Software (Elsevier).