ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

Large Language Models (LLMs) have rapidly established themselves in recent years as a means to support or substitute for human actors in a variety of tasks. LLM agents can generate valid software models, owing to their inherent ability to interpret textual requirements provided to them in the form of prompts.

The goal of this work is to evaluate the capability of LLM agents to correctly generate UML class diagrams in Requirements Modeling activities in the field of Software Engineering. Our aim is to evaluate LLMs in an educational setting, i.e., to understand how valuable the results of LLMs are when compared to those produced by human actors, and how valuable LLMs can be for generating sample solutions to provide to students.

For that purpose, we collected 20 exercises from a diverse set of web sources and compared the models generated by a human solver and an LLM solver in terms of syntactic, semantic, and pragmatic correctness, as well as distance from a provided reference solution.
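As an illustration of how such a distance from a reference solution could be operationalized, the sketch below computes a Jaccard distance over flattened model-element labels. The toy dictionary encoding and the helper names (`model_elements`, `textual_distance`) are assumptions made for this example; the abstract does not specify the actual metric used in the study.

```python
# Illustrative sketch: measure how far a generated UML class diagram is
# from a reference solution. The Jaccard distance over element labels is
# an assumed metric for illustration, not necessarily the paper's method.

def model_elements(model: dict) -> set[str]:
    """Flatten a toy class-diagram encoding into a set of labeled elements."""
    elems: set[str] = set()
    for cls, members in model.items():
        elems.add(f"class:{cls}")
        for attr in members.get("attributes", []):
            elems.add(f"attr:{cls}.{attr}")
        for other in members.get("associations", []):
            elems.add(f"assoc:{cls}->{other}")
    return elems

def textual_distance(generated: dict, reference: dict) -> float:
    """Jaccard distance in [0, 1]: 0 = identical element sets, 1 = disjoint."""
    a, b = model_elements(generated), model_elements(reference)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

reference = {"Order": {"attributes": ["date"], "associations": ["Customer"]},
             "Customer": {"attributes": ["name"], "associations": []}}
generated = {"Order": {"attributes": ["date", "total"], "associations": ["Client"]},
             "Client": {"attributes": ["name"], "associations": []}}
print(round(textual_distance(generated, reference), 2))  # 0.78
```

A set-based metric like this rewards matching element names regardless of diagram layout, which fits the paper's observation that the LLM's errors were mostly related to the textual content of the solutions.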

Our results show that the solutions generated by the LLM solver typically present a significantly higher number of errors in terms of semantic quality and a greater textual difference from the provided reference solution, while no significant difference is found in syntactic and pragmatic quality.

We can therefore conclude that, apart from a limited number of errors mostly related to the textual content of the solution, UML diagrams generated by LLM agents have the same level of understandability as those generated by humans, and violate the rules of UML class diagrams with the same frequency.

Fri 25 Oct

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

11:00 - 12:30
Large language models in software engineering
ESEM Technical Papers / ESEM Emerging Results, Vision and Reflection Papers Track at Telensenyament (B3 Building - 1st Floor)
Chair(s): Phuong T. Nguyen University of L’Aquila
11:00
20m
Full-paper
Optimizing the Utilization of Large Language Models via Schedule Optimization: An Exploratory Study
ESEM Technical Papers
Yueyue Liu The University of Newcastle, Hongyu Zhang Chongqing University, Zhiqiang Li Shaanxi Normal University, Yuantian Miao The University of Newcastle
11:20
20m
Full-paper
A Comparative Study on Large Language Models for Log Parsing
ESEM Technical Papers
Merve Astekin Simula Research Laboratory, Max Hort Simula Research Laboratory, Leon Moonen Simula Research Laboratory and BI Norwegian Business School
11:40
20m
Full-paper
Are Large Language Models a Threat to Programming Platforms? An Exploratory Study
ESEM Technical Papers
Md Mustakim Billah University of Saskatchewan, Palash Ranjan Roy University of Saskatchewan, Zadia Codabux University of Saskatchewan, Banani Roy University of Saskatchewan
Pre-print
12:00
15m
Vision and Emerging Results
Automatic Library Migration Using Large Language Models: First Results
ESEM Emerging Results, Vision and Reflection Papers Track
Aylton Almeida UFMG, Laerte Xavier PUC Minas, Marco Tulio Valente Federal University of Minas Gerais, Brazil
12:15
15m
Vision and Emerging Results
Evaluating Large Language Models in Exercises of UML Class Diagram Modeling
ESEM Emerging Results, Vision and Reflection Papers Track
Daniele De Bari Politecnico di Torino, Giacomo Garaccione Politecnico di Torino, Riccardo Coppola Politecnico di Torino, Marco Torchiano Politecnico di Torino, Luca Ardito Politecnico di Torino