ICSE 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
Sat 3 May 2025 14:50 - 15:00 at 214 - Paper Session 3 Chair(s): Chao Peng

LLMs have been shown to match or even exceed the performance of specialized deep learning models on code generation tasks for general-purpose imperative languages such as Python, Java, C++, and Rust. Conversely, there is only limited work investigating whether such impressive out-of-the-box generalization transfers to less ubiquitous domain-specific languages, which are often declarative and based on XML, JSON, or YAML. To bridge this gap, we explore the capabilities of LLMs for composing code automation recipes without resorting to any form of task-specific finetuning. We experiment with two GPT versions and CodeLLaMA-13b, and find that even after extensive prompt engineering and chain-of-thought prompting, these models' performance in recipe selection is ≈ 30%, while their performance in parameter filling of YAML recipes remains below ≈ 50%. However, by decomposing the task into two stages, dense retrieval and generative slot filling, while still keeping our setup training-free, the models attain a performance of ≈ 50% to ≈ 67% in recipe selection and ≈ 60% to ≈ 76% in parameter filling. Our study sheds light on the capabilities of LLMs in generating scripts for less widespread languages and opens up avenues for future research.

Sat 3 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Paper Session 3 (LLM4Code) at 214
Chair(s): Chao Peng ByteDance
14:00
10m
Talk
Mix-of-Language-Experts Architecture for Multilingual Programming
LLM4Code
Yifan Zong University of Waterloo, Yuntian Deng University of Waterloo, Pengyu Nie University of Waterloo
14:10
10m
Talk
Proving the Coding Interview: A Benchmark for Formally Verified Code Generation
LLM4Code
Quinn Dougherty Unaffiliated, Ronak Mehta Unaffiliated
14:20
10m
Talk
LLM-ProS: Analyzing Large Language Models’ Performance in Competitive Problem Solving
LLM4Code
Md Sifat Hossain University of Dhaka, Anika Tabassum University of Dhaka, Md. Fahim Arefin University of Dhaka, Tarannum Shaila Zaman University of Maryland Baltimore County
Media Attached
14:30
10m
Talk
Syzygy: Dual Code-Test C to (safe) Rust Translation using LLMs and Dynamic Analysis
LLM4Code
Manish Shetty University of California, Berkeley, Naman Jain University of California, Berkeley, Adwait Godbole University of California, Berkeley, Sanjit A. Seshia University of California, Berkeley, Koushik Sen University of California at Berkeley
14:40
10m
Talk
Evaluating Language Models for Computer Graphics Code Completion
LLM4Code
Jan Kels Heinrich-Heine-Universität Düsseldorf, Abdelhalim Dahou GESIS – Leibniz-Institute for the Social Sciences, Brigitte Mathiak GESIS – Leibniz-Institute for the Social Sciences
Media Attached File Attached
14:50
10m
Talk
From Zero to Sixty at the Speed of RAG: Improving YAML Recipe Generation via Retrieval
LLM4Code
Farima Farmahinifarahani J.P. Morgan AI Research, Petr Babkin J.P. Morgan AI Research, Salwa Alamir J.P. Morgan AI Research, Xiaomo Liu J.P. Morgan AI Research
15:00
10m
Talk
SC-Bench: A Large-Scale Dataset for Smart Contract Auditing
LLM4Code
Shihao Xia The Pennsylvania State University, Mengting He The Pennsylvania State University, Linhai Song The Pennsylvania State University, Yiying Zhang University of California San Diego
15:10
10m
Talk
METAMON: Finding Inconsistencies between Program Documentation and Behavior using Metamorphic LLM Queries
LLM4Code
Hyunseok Lee KAIST, Gabin An KAIST, Shin Yoo KAIST
Pre-print
15:20
10m
Talk
CWEval: Outcome-driven Evaluation on Functionality and Security of LLM Code Generation
LLM4Code
Jinjun Peng Columbia University, Leyi Cui Columbia University, Kele Huang Columbia University, Junfeng Yang Columbia University, Baishakhi Ray Columbia University