ESEIW 2025
Sun 28 September - Fri 3 October 2025

Software migration across programming languages is a critical yet labor-intensive task, often requiring deep code understanding and manual intervention. In this study, we develop a fully automated agent for end-to-end code translation and validation. First, we generate code comments from Java source code using various large language models (LLMs) to enhance code comprehension and facilitate cross-language translation. Second, leveraging these AI-generated comments, we automatically generate equivalent C# code, demonstrating the potential of AI in software migration and interoperability. Third, we complete both the Java and the generated C# code and prepare them for execution. Fourth, we apply automated unit testing to assess functional correctness and ensure the reliability of the AI-generated code. Our results show that a fully automated LLM agent may effectively bridge programming languages with minimal human input. This approach opens new possibilities for scalable, AI-driven software modernization and cross-platform development. We recommend that such an LLM agent be used to support human experts in generating reliable and correct code.
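
The four steps in the abstract suggest a pipeline of comment generation, translation, code completion, and test-based validation. The sketch below is a hypothetical illustration of such a pipeline, not the authors' implementation: the prompts, the pluggable `llm` callable, and the `dotnet test` validation step are all assumptions made for the example.

```python
"""Hypothetical sketch of a comment-assisted Java -> C# migration agent."""
import subprocess
from pathlib import Path
from typing import Callable


def annotate_java(java_source: str, llm: Callable[[str], str]) -> str:
    """Step 1: ask an LLM to add explanatory comments to the Java source."""
    prompt = ("Add concise Javadoc and inline comments explaining what this "
              "Java code does. Return only the commented code.\n\n" + java_source)
    return llm(prompt)


def translate_to_csharp(commented_java: str, llm: Callable[[str], str]) -> str:
    """Step 2: translate the commented Java into equivalent, compilable C#."""
    prompt = ("Translate the following commented Java code into idiomatic C#. "
              "Return only the C# code.\n\n" + commented_java)
    return llm(prompt)


def run_unit_tests(project_dir: Path) -> bool:
    """Step 4: validate the generated C# by running unit tests
    (assumes a .NET test project already exists in project_dir)."""
    result = subprocess.run(["dotnet", "test", str(project_dir)],
                            capture_output=True, text=True)
    return result.returncode == 0


def migrate(java_file: Path, out_dir: Path, llm: Callable[[str], str]) -> bool:
    """End-to-end run: annotate, translate, place into a project, then test."""
    commented = annotate_java(java_file.read_text(), llm)
    csharp = translate_to_csharp(commented, llm)
    # Step 3: drop the generated C# into an executable project skeleton.
    (out_dir / (java_file.stem + ".cs")).write_text(csharp)
    return run_unit_tests(out_dir)
```

In this reading, the unit-test outcome serves as the automated correctness signal that the abstract describes, with a human expert reviewing any migration whose tests fail.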

Fri 3 Oct

Displayed time zone: Hawaii

14:00 - 15:20
LLMs for Code Generation, Translation, and Maintainability
ESEM - Technical Track / ESEM - Emerging Results and Vision Track
at Kaiulani I
Chair(s): Ivan Machado Federal University of Bahia - UFBA
14:00
20m
Talk
A Fully Automated Agent for End-to-End Code Translation and Validation
ESEM - Emerging Results and Vision Track
Eray Erer (Boğaziçi University), Ayşe Başar (Toronto Metropolitan University, Toronto, Canada), Aysun Bozanta (Bogazici University), Turgay Aytac (Comunale Capital)
14:20
20m
Talk
Contextual Code Retrieval for Commit Message Generation: A Preliminary Study
ESEM - Emerging Results and Vision Track
Bo Xiong (Wuhan University), Linghao Zhang (Wuhan University), Chong Wang (Wuhan University), Peng Liang (Wuhan University, China)
Pre-print
14:40
20m
Talk
How Small is Enough? Empirical Evidence of Quantized Small Language Models for Automated Program Repair
ESEM - Emerging Results and Vision Track
Kazuki Kusama, Honglin Shu (Kyushu University), Masanari Kondo (Kyushu University), Yasutaka Kamei (Kyushu University)
15:00
20m
Talk
Is LLM-Generated Code More Maintainable & Reliable than Human-Written Code?
ESEM - Technical Track
Alfred Santa Molison (Toronto Metropolitan University), Fabio Marcos De Abreu Santos (Colorado State University, USA), Marcia Moraes (Colorado State University), Glaucia Melo (Toronto Metropolitan University), Wesley Assunção (North Carolina State University)