ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

Background: Log messages provide valuable information about the status of software systems. However, this information is provided in an unstructured fashion, and automated approaches are needed to extract relevant parameters. To ease this process, log parsing can be applied, which transforms log messages into structured log templates. Recent advances in language models have led to several studies that apply ChatGPT to log parsing with promising results. However, the performance of other state-of-the-art large language models (LLMs) on the log parsing task remains unclear. Aims: In this study, we investigate the current capability of state-of-the-art LLMs to perform log parsing. Method: Specifically, we select six recent LLMs, including two paid proprietary models (GPT-3.5, Claude 2.1) and four free-to-use open models, and compare their performance on system logs obtained from a selection of mature open-source projects. We design two different prompting approaches and apply the LLMs to 1,354 log templates across 16 different projects. We evaluate their effectiveness in terms of the number of correctly identified templates and the syntactic similarity between the generated templates and the ground truth. Results: Our study shows that free-to-use models are able to compete with paid models, with CodeLlama correctly extracting 10% more log templates than GPT-3.5. Moreover, we provide qualitative insights into the usability of language models (e.g., how easy it is to use their responses). Conclusions: Our findings reveal that some of the smaller, free-to-use LLMs, especially the code-specialized models, can considerably assist log parsing compared to their paid proprietary competitors.
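For readers unfamiliar with the task, the sketch below is a minimal, hypothetical illustration of what log parsing produces: a raw log message is reduced to a template by replacing its variable parameters with a placeholder. The regular expressions and the naive_template function are assumptions made purely for illustration and are not the approach evaluated in the paper; the "<*>" placeholder follows the convention commonly used in log parsing benchmarks.

```python
import re

# Illustrative only: reduce a raw log message to a template by masking
# obviously dynamic fields with the "<*>" placeholder.
def naive_template(message: str) -> str:
    message = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<*>", message)  # IPv4 addresses
    message = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", message)         # hex identifiers
    message = re.sub(r"\b\d+\b", "<*>", message)                    # plain integers
    return message

if __name__ == "__main__":
    log = "Connection from 10.251.42.84 closed after 120 seconds"
    print(naive_template(log))
    # -> "Connection from <*> closed after <*> seconds"
```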

Fri 25 Oct

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

11:00 - 12:30
Large language models in software engineering I
ESEM Technical Papers / ESEM Emerging Results, Vision and Reflection Papers Track at Telensenyament (B3 Building - 1st Floor)
Chair(s): Phuong T. Nguyen University of L’Aquila
11:00
20m
Full-paper
Optimizing the Utilization of Large Language Models via Schedule Optimization: An Exploratory Study
ESEM Technical Papers
Yueyue Liu The University of Newcastle, Hongyu Zhang Chongqing University, Zhiqiang Li Shaanxi Normal University, Yuantian Miao The University of Newcastle
11:20
20m
Full-paper
A Comparative Study on Large Language Models for Log Parsing
ESEM Technical Papers
Merve Astekin Simula Research Laboratory, Max Hort Simula Research Laboratory, Leon Moonen Simula Research Laboratory and BI Norwegian Business School
11:40
20m
Full-paper
Are Large Language Models a Threat to Programming Platforms? An Exploratory Study
ESEM Technical Papers
Md Mustakim Billah University of Saskatchewan, Palash Ranjan Roy University of Saskatchewan, Zadia Codabux University of Saskatchewan, Banani Roy University of Saskatchewan
Pre-print
12:00
15m
Vision and Emerging Results
Automatic Library Migration Using Large Language Models: First Results
ESEM Emerging Results, Vision and Reflection Papers Track
Aylton Almeida UFMG, Laerte Xavier PUC Minas, Marco Tulio Valente Federal University of Minas Gerais, Brazil
12:15
15m
Vision and Emerging Results
Evaluating Large Language Models in Exercises of UML Class Diagram Modeling
ESEM Emerging Results, Vision and Reflection Papers Track
Daniele De Bari Politecnico di Torino, Giacomo Garaccione Politecnico di Torino, Riccardo Coppola Politecnico di Torino, Marco Torchiano Politecnico di Torino, Luca Ardito Politecnico di Torino