ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Sat 20 Apr 2024 12:10 - 12:20 at Luis de Freitas Branco - Session 2: Full Papers Chair(s): Yiling Lou

Code LLMs have the potential to make it easier for non-experts to understand and write code. However, current Code LLM benchmarks rely on a single expert-written prompt per problem, making it hard to generalize their success to non-expert users. In this paper, we present a new natural-language-to-code benchmark of prompts written by a key population of non-experts: beginning programmers. StudentEval contains 1,749 prompts written by 80 students who have only completed one introductory Python course. StudentEval contains numerous non-expert prompts describing the same problem, enabling exploration of key factors in prompt success. We use StudentEval to evaluate 12 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. Our analysis of student prompting strategies reveals that nondeterministic LLM sampling can mislead students about the quality of their descriptions, a finding with key implications for Code LLMs in education.
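The abstract's point about nondeterministic sampling can be made concrete with the pass@k estimator that is standard in Code LLM benchmarking (a sketch for illustration only; StudentEval's exact evaluation protocol is described in the paper). If a prompt yields only 2 correct completions out of 20 sampled generations, a student who happens to see one of those 2 may wrongly conclude the prompt is reliable:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn from n generations (c of which are correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Same prompt, nondeterministic sampling: 2 of 20 generations pass.
print(round(pass_at_k(20, 2, 1), 2))   # pass@1  = 0.1
print(round(pass_at_k(20, 2, 10), 2))  # pass@10 = 0.76
```

A single lucky draw (pass@1 of 0.1) gives no signal about whether the description itself was good, which is why repeated sampling can mislead beginners about prompt quality.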

Sat 20 Apr

Displayed time zone: Lisbon

11:00 - 12:30
Session 2: Full Papers (LLM4Code) at Luis de Freitas Branco
Chair(s): Yiling Lou Fudan University
11:00
10m
Talk
LLM-based and Retrieval-Augmented Control Code Generation
LLM4Code
Heiko Koziolek ABB Corporate Research, Sten Grüner ABB Corporate Research, Rhaban Hark ABB Research, Virendra Ashiwal ABB Research, Sofia Linsbauer ABB Research, Nafise Eskandani ABB Corporate Research Center
Pre-print
11:10
10m
Talk
Learn to Code Sustainably: An Empirical Study on Green Code Generation
LLM4Code
Tina Vartziotis TWT Science and Innovation, National Technical University of Athens, Ippolyti Dellatolas Massachusetts Institute of Technology, George Dasoulas Harvard University, Maximilian Schmidt TWT Science and Innovation, Florian Schneider TWT Science and Innovation, Tim Hoffmann Mercedes-Benz, Sotirios Kotsopoulos National Technical University of Athens, Massachusetts Institute of Technology, Michael Keckeisen TWT Science and Innovation
11:20
10m
Talk
Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions
LLM4Code
Federico Cassano Northeastern University, Tao Li Northeastern University, Akul Sethi Northeastern University, Noah Shinn Northeastern University, Abby Brennan-Jones Wellesley College, Anton Lozhkov Hugging Face, Carolyn Jane Anderson Wellesley College, Arjun Guha Northeastern University; Roblox
Pre-print
11:30
10m
Talk
HierarchyNet: Learning to Summarize Source Code with Heterogeneous Representations
LLM4Code
Thai Minh Nguyen Monash University, Nghi D. Q. Bui Fulbright University, Viet Nam
11:40
10m
Talk
LLM-based Control Code Generation using Image Recognition
LLM4Code
Heiko Koziolek ABB Corporate Research, Anne Koziolek Karlsruhe Institute of Technology
Pre-print
11:50
10m
Talk
Translation of Low-Resource COBOL to Logically Correct and Readable Java leveraging High-Resource Java Refinement
LLM4Code
Shubham Gandhi TCS Research, Manasi Patwardhan TCS Research, Jyotsana Khatri TCS Research, Lovekesh Vig TCS Research, New Delhi, India, Raveendra Kumar Medicherla TCS Research, Tata Consultancy Services
Pre-print
12:00
10m
Talk
Unit Test Generation using Generative AI: A Comparative Performance Analysis of Autogeneration Tools
LLM4Code
Shreya Bhatia IIIT Delhi, Tarushi Gandhi IIIT Delhi, Dhruv Kumar Indraprastha Institute of Information Technology, Delhi, Pankaj Jalote IIIT Delhi
Pre-print
12:10
10m
Talk
StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code (Best Presentation Award)
LLM4Code
Hannah McLean Babe Oberlin College, Sydney Nguyen Wellesley College, Yangtian Zi Northeastern University, Arjun Guha Northeastern University; Roblox, Molly Feldman Oberlin College, Carolyn Jane Anderson Wellesley College
Pre-print
12:20
10m
Talk
PromptSet: A Programmer’s Prompting Dataset
LLM4Code
Kaiser Pister University of Wisconsin-Madison, Dhruba Jyoti Paul University of Wisconsin-Madison, Ishan Joshi University of Wisconsin-Madison, Patrick Brophy University of Wisconsin-Madison
Pre-print