STAF 2024
Mon 8 - Thu 11 July 2024 Enschede, Netherlands

This program is tentative and subject to change.

Tue 9 Jul 2024 12:00 - 12:30 at Waaier 2 - ECMFA Session 4

Metamodels play an important role in MDE and in specifying a software language. They are the cornerstone for generating other artifacts at a lower abstraction level, such as code. Developers then enrich the generated code to build their language services and tooling, e.g., editors and checkers. When a metamodel evolves, part of the code is regenerated and all the additional developers’ code can be impacted, requiring the resulting errors to be co-evolved accordingly. In this paper, we explore a novel approach to mitigate the impact of metamodel evolution on the code using LLMs. Indeed, LLMs stand as promising tools for tackling increasingly complex problems and for supporting developers in various tasks of writing, correcting, and documenting source code, models, and other artifacts. However, while there is extensive empirical assessment of the capabilities of LLMs in generating models, code, and tests, there is a lack of work on their ability to support maintenance. In this paper, we focus on the particular problem of metamodel and code co-evolution. We first designed a prompt template structure that contains contextual information about the metamodel changes, the abstraction gap between the metamodel and the code, and the erroneous code to co-evolve. To investigate the usefulness of this template, we generated three more variations of the prompts. The generated prompts are then given to the LLM to co-evolve the impacted code. We evaluated our generated prompts and their three variations with ChatGPT version 3.5 on seven Eclipse projects from the evolved OCL and Modisco metamodels. Results show that ChatGPT can correctly co-evolve 88.7% of the errors due to metamodel evolution, with correctness rates ranging from 75% to 100%. When varying the prompts, we observed increased correctness in two variants and decreased correctness in another. We also observed that varying the temperature hyperparameter yields better results with lower temperatures. Our results are observed on a total of 5320 generated prompts. Finally, when compared to the quick fixes of the IDE, the co-evolutions obtained from the generated prompts completely outperform the quick fixes.
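The paper's exact prompt template is not reproduced in this abstract; the following is a minimal, hypothetical sketch of how a prompt carrying the three pieces of context mentioned above (metamodel change, abstraction gap, erroneous code) could be assembled and sent to ChatGPT 3.5 at a low temperature. The template wording, field names, and the co_evolve helper are illustrative assumptions, not the authors' actual artifacts.

```python
# Hypothetical sketch: build a co-evolution prompt and query the LLM.
# Field names and wording are assumptions for illustration only.
from openai import OpenAI

PROMPT_TEMPLATE = """You are co-evolving code after a metamodel change.
Metamodel change: {metamodel_change}
Abstraction gap between metamodel and code: {abstraction_gap}
Erroneous code to co-evolve:
{erroneous_code}
Return only the corrected code."""


def co_evolve(metamodel_change: str, abstraction_gap: str, erroneous_code: str) -> str:
    """Fill the template and ask the LLM for the co-evolved code."""
    prompt = PROMPT_TEMPLATE.format(
        metamodel_change=metamodel_change,
        abstraction_gap=abstraction_gap,
        erroneous_code=erroneous_code,
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # the study reports better results at lower temperatures
    )
    return response.choices[0].message.content
```

In such a setup, the prompt variations evaluated in the paper would correspond to alternative wordings or subsets of the contextual fields above, and the temperature parameter is the hyperparameter whose lower values were observed to yield better results.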

Tue 9 Jul

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
ECMFA Session 4 (ECMFA) at Waaier 2
11:00
30m
Research paper
Towards a Semantically Useful Definition of Conformance with a Reference Model
ECMFA
Marco Konersmann; Bernhard Rumpe (RWTH Aachen University); Max Stachon (RWTH Aachen University); Sebastian Stüber (RWTH Aachen University, Chair of Software Engineering); Valdes Voufo (RWTH Aachen University)
11:30
30m
Research paper
Integrating the Support for Machine Learning of Inter-Model Relations in Model Views
ECMFA
James Pontes Miranda (IMT Atlantique, LS2N (UMR CNRS 6004)); Hugo Bruneliere (IMT Atlantique, LS2N (UMR CNRS 6004)); Massimo Tisi (IMT Atlantique, LS2N (UMR CNRS 6004)); Gerson Sunyé (IMT Atlantique; Nantes Université; École Centrale Nantes)
12:00
30m
Research paper
An Empirical Study on Leveraging LLMs for Metamodels and Code Co-evolution
ECMFA
Zohra Kaouter Kebaili (Univ Rennes, CNRS, IRISA); Djamel Eddine Khelladi (CNRS, IRISA, University of Rennes); Mathieu Acher (University of Rennes / Inria / CNRS / IRISA, France); Olivier Barais (University of Rennes / Inria / CNRS / IRISA, France)

Information for Participants
Tue 9 Jul 2024 11:00 - 12:30 at Waaier 2 - ECMFA Session 4
Info for room Waaier 2: [image]