EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy

This study evaluates the efficiency of code generation by Large Language Models (LLMs) and measures their performance against human-crafted solutions using a dataset from Leetcode. We compare 18 LLMs, examining how factors such as model temperature and success rate affect code performance. The research introduces a novel method for measuring and comparing the speed of LLM-generated code, revealing that LLMs produce code with comparable performance, irrespective of the model used. We also find that LLMs are capable of generating code that is, on average, more efficient than the code written by humans. The paper further discusses the use of Leetcode as a benchmarking dataset, the limitations imposed by potential data contamination, and the platform's measurement reliability. Our findings contribute to a better understanding of LLM capabilities in code generation and set the stage for future optimizations in the field.
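The abstract does not detail the measurement method, which relies on Leetcode's own runtime reports. Purely as a rough illustration of the general idea of timing an LLM-generated solution against a human-written one on shared inputs, here is a minimal local sketch in Python; the problem, both solutions, and the harness are hypothetical and are not the authors' pipeline.

    # Hypothetical illustration only: times two solutions to the classic
    # Two Sum problem on the same input and reports the median runtime.
    import statistics
    import timeit

    def human_two_sum(nums, target):
        # Brute-force baseline: O(n^2).
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]
        return []

    def llm_two_sum(nums, target):
        # Hash-map solution, as LLMs commonly produce: O(n).
        seen = {}
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i
        return []

    def bench(fn, nums, target, repeats=5, number=100):
        # Median of several timing runs to reduce noise.
        timer = timeit.Timer(lambda: fn(nums, target))
        return statistics.median(timer.repeat(repeat=repeats, number=number))

    if __name__ == "__main__":
        nums = list(range(2000))
        target = 2 * len(nums) - 3  # near-worst case for the brute force
        for name, fn in [("human", human_two_sum), ("llm", llm_two_sum)]:
            print(f"{name}: {bench(fn, nums, target):.4f}s")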

Thu 20 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

14:00 - 15:25
Artificial Intelligence for Software Engineering (Industry / Research Papers / Short Papers, Vision and Emerging Results) at Room Capri
Chair(s): Sridhar Chimalakonda (Indian Institute of Technology, Tirupati), Klaus Schmid (University of Hildesheim)
14:00
15m
Talk
A Performance Study of LLM-Generated Code on Leetcode
Research Papers
Tristan Coignion, Clement Quinton (University of Lille, Inria), Romain Rouvoy (Univ. Lille / Inria / CNRS)
Pre-print
14:15
15m
Talk
How Much Logs Does My Source Code File Need? Learning to Predict the Density of Logs
Research Papers
Mohamed Amine Batoun (École de Technologie Supérieure), Mohammed Sayagh (ETS Montreal, University of Quebec), Ali Ouni (ETS Montreal, University of Quebec)
14:30
15m
Talk
The Promise and Challenges of using LLMs to Accelerate the Screening Process of Systematic Reviews
Research Papers
Aleksi Huotala (University of Helsinki), Miikka Kuutila (Dalhousie University), Paul Ralph (Dalhousie University), Mika Mäntylä (University of Helsinki and University of Oulu)
Link to publication · DOI · Pre-print
14:45
15m
Talk
AI-enabled efficient PVM performance monitoring
Industry
Mario Veniero (Independent Researcher), Davide Varriale (MEDIACOM SRL)
DOI
15:00
15m
Talk
Automated evaluation of game content display using deep learning
Industry
Ciprian Paduraru (University of Bucharest), Marina Cernat (University of Bucharest), Alin Stefanescu (University of Bucharest)
15:15
10m
Talk
Automated categorization of pre-trained models in software engineering: A case study with a Hugging Face dataset
Short Papers, Vision and Emerging Results
Claudio Di Sipio (University of L'Aquila), Riccardo Rubei (University of L'Aquila), Juri Di Rocco (University of L'Aquila), Davide Di Ruscio (University of L'Aquila), Phuong T. Nguyen (University of L'Aquila)
Pre-print