ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Sat 20 Apr 2024 12:00 - 12:10 at Luis de Freitas Branco - Session 2: Full Papers - Chair(s): Yiling Lou

Generating unit tests is a crucial task in software development, demanding substantial time and effort from programmers. The advent of Large Language Models (LLMs) introduces a novel avenue for unit test script generation. This research aims to experimentally investigate the effectiveness of LLMs, specifically exemplified by ChatGPT, for generating unit test scripts for Python programs, and how the generated test cases compare with those generated by an existing unit test generator (Pynguin). For experiments, we consider three types of code units: 1) Procedural scripts, 2) Function-based modular code, and 3) Class-based code. The generated test cases are evaluated based on criteria such as coverage, correctness, and readability. Our results show that ChatGPT’s performance is comparable with Pynguin’s in terms of coverage, and in some cases it is superior to Pynguin. We also find that, for some categories, about a third of the assertions generated by ChatGPT are incorrect. Our results also show that there is minimal overlap in missed statements between ChatGPT and Pynguin, suggesting that a combination of both tools may enhance unit test generation performance. Finally, in our experiments, prompt engineering improved ChatGPT’s performance, achieving much higher coverage.
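
To make the setting concrete, the sketch below is an illustrative example, not material from the paper: a hypothetical "function-based" Python code unit (slugify) together with pytest-style tests of the kind an LLM such as ChatGPT, or a tool such as Pynguin, might generate for it. Statement coverage of such tests can then be measured with a standard tool like coverage.py.

    # example_unit.py -- hypothetical function-based code unit (illustrative only)
    import re

    def slugify(text: str, max_length: int = 50) -> str:
        """Convert arbitrary text into a lowercase, hyphen-separated slug."""
        # Collapse runs of non-alphanumeric characters into single hyphens.
        text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())
        return text.strip("-")[:max_length]

    # --- generated tests (would normally live in test_example_unit.py) ---

    def test_basic_sentence():
        assert slugify("Hello, World!") == "hello-world"

    def test_truncation_respects_max_length():
        assert len(slugify("a" * 100, max_length=10)) <= 10

    def test_empty_string_yields_empty_slug():
        assert slugify("") == ""

    # Statement coverage could be measured with, e.g.:
    #   coverage run -m pytest && coverage report

The "incorrect assertions" finding refers to tests like these whose expected values do not match the actual behaviour of the code under test, so they fail even though the code is correct.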

Sat 20 Apr

Displayed time zone: Lisbon

11:00 - 12:30
Session 2: Full Papers (LLM4Code) at Luis de Freitas Branco
Chair(s): Yiling Lou Fudan University
11:00
10m
Talk
LLM-based and Retrieval-Augmented Control Code Generation
LLM4Code
Heiko Koziolek ABB Corporate Research, Sten Grüner ABB Corporate Research, Rhaban Hark ABB Research, Virendra Ashiwal ABB Research, Sofia Linsbauer ABB Research, Nafise Eskandani ABB Corporate Research Center
Pre-print
11:10
10m
Talk
Learn to Code Sustainably: An Empirical Study on Green Code Generation
LLM4Code
Tina Vartziotis TWT Science and Innovation, National Technical University of Athens, Ippolyti Dellatolas Massachusetts Institute of Technology, George Dasoulas Harvard University, Maximilian Schmidt TWT Science and Innovation, Florian Schneider TWT Science and Innovation, Tim Hoffmann Mercedes-Benz, Sotirios Kotsopoulos National Technical University of Athens, Massachusetts Institute of Technology, Michael Keckeisen TWT Science and Innovation
11:20
10m
Talk
Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions
LLM4Code
Federico Cassano Northeastern University, Tao Li Northeastern University, Akul Sethi Northeastern University, Noah Shinn Northeastern University, Abby Brennan-Jones Wellesley College, Anton Lozhkov Hugging Face, Carolyn Jane Anderson Wellesley College, Arjun Guha Northeastern University; Roblox
Pre-print
11:30
10m
Talk
HierarchyNet: Learning to Summarize Source Code with Heterogeneous Representations
LLM4Code
Thai Minh Nguyen Monash University, Nghi D. Q. Bui Fulbright University, Viet Nam
11:40
10m
Talk
LLM-based Control Code Generation using Image Recognition
LLM4Code
Heiko Koziolek ABB Corporate Research, Anne Koziolek Karlsruhe Institute of Technology
Pre-print
11:50
10m
Talk
Translation of Low-Resource COBOL to Logically Correct and Readable Java leveraging High-Resource Java Refinement
LLM4Code
Shubham Gandhi TCS Research, Manasi Patwardhan TCS Research, Jyotsana Khatri TCS Research, Lovekesh Vig TCS Research, New Delhi, India, Raveendra Kumar Medicherla TCS Research, Tata Consultancy Services
Pre-print
12:00
10m
Talk
Unit Test Generation using Generative AI: A Comparative Performance Analysis of Autogeneration Tools
LLM4Code
Shreya Bhatia IIIT Delhi, Tarushi Gandhi IIIT Delhi, Dhruv Kumar Indraprastha Institute of Information Technology, Delhi, Pankaj Jalote IIIT Delhi
Pre-print
12:10
10m
Talk
StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code (Best Presentation Award)
LLM4Code
Hannah McLean Babe Oberlin College, Sydney Nguyen Wellesley College, Yangtian Zi Northeastern University, Arjun Guha Northeastern University; Roblox, Molly Q Feldman Oberlin College, Carolyn Jane Anderson Wellesley College
Pre-print
12:20
10m
Talk
PromptSet: A Programmer’s Prompting Dataset
LLM4Code
Kaiser Pister University of Wisconsin-Madison, Dhruba Jyoti Paul University of Wisconsin-Madison, Ishan Joshi University of Wisconsin-Madison, Patrick Brophy University of Wisconsin-Madison
Pre-print