ICFP/SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore

This program is tentative and subject to change.

Wed 15 Oct 2025 14:25 - 14:40 at Orchid Small - LLMs for Code Generation Chair(s): Di Wang

The C++ programming language is a mainstream choice for developing a wide range of systems due to its efficiency, particularly in fields with high performance requirements. However, C++ programs are prone to memory-management and safety defects, such as dangling pointers and memory leaks, which pose growing challenges in modern software development. Rust, a modern programming language designed to address memory safety issues, has gained widespread attention for its ownership system and memory-safety guarantees, driving research and practice in migrating C++ code to Rust. However, the differences in syntax and features between C++ and Rust, together with C++'s complex object-oriented features, make it extremely difficult to convert C++ code directly into Rust code.
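The ownership system mentioned above is what blocks the dangling-pointer patterns that are easy to write in C++. As a minimal illustrative sketch (not taken from the paper), the Rust compiler statically rejects any use of a value after its ownership has been transferred:

```rust
// When a String is passed by value, ownership moves into the callee and
// the heap allocation is freed when the callee returns. The caller can no
// longer touch it, so the use-after-free possible in C++ cannot compile.
fn take_ownership(s: String) -> usize {
    s.len() // `s` is dropped at the end of this function
}

fn main() {
    let s = String::from("hello");
    let n = take_ownership(s);
    // println!("{}", s); // would not compile: `s` was moved above
    println!("length = {}", n);
}
```

The commented-out line is exactly the kind of access that C++ would accept silently but Rust's borrow checker reports at compile time.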

With the development of large language models (LLMs), significant progress has been made in code translation and understanding. This paper investigates using LLMs to convert C++ code into Rust code by decomposing the C++ code into independently compilable units (CPP features) and extracting their dependent symbols through program analysis. We selected GPT and DeepSeek for experimentation, analyzed their translation results, and examined the errors made by DeepSeek. By manually classifying these errors, we identified the root causes of translation issues and provide findings and suggestions for future research on translating C++ code into Rust code.
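The "dependent symbols" step described above can be pictured with a deliberately naive sketch (a toy illustration, not the paper's actual program-analysis pipeline): given a C++ snippet, collect the identifiers it references but does not define, approximating the context that must accompany each unit before it is handed to the LLM.

```rust
use std::collections::BTreeSet;

// Toy approximation of dependent-symbol extraction: tokenize a C++ snippet
// and keep identifiers that are neither keywords nor locally defined names.
// A real analysis would use a proper C++ parser; this only splits on
// non-identifier characters.
fn dependent_symbols(snippet: &str, defined: &[&str]) -> BTreeSet<String> {
    let keywords = ["int", "return", "void", "if", "else", "for", "while"];
    snippet
        .split(|c: char| !c.is_alphanumeric() && c != '_')
        .filter(|tok| !tok.is_empty())
        .filter(|tok| tok.chars().next().unwrap().is_alphabetic())
        .filter(|tok| !keywords.contains(tok))
        .filter(|tok| !defined.contains(tok))
        .map(str::to_string)
        .collect()
}

fn main() {
    let cpp = "int area(int w) { return w * scale(h); }";
    // `area` and `w` are defined by the unit itself; `scale` and `h` are
    // external dependencies that would need to be supplied alongside it.
    let deps = dependent_symbols(cpp, &["area", "w"]);
    println!("{:?}", deps);
}
```

The point of the sketch is only the shape of the task: each extracted unit travels to the model together with the declarations of the symbols it depends on.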


Wed 15 Oct

Displayed time zone: Perth

13:40 - 15:20
LLMs for Code Generation (LMPL) at Orchid Small
Chair(s): Di Wang Peking University
13:40
15m
Talk
W2GPU: Toward WebAssembly-to-WebGPU Program Translation via Small Language Models
LMPL
Mehmet Oguz Derin Unaffiliated
Media Attached
13:55
15m
Talk
Reasoning as a Resource: Optimizing Fast and Slow Thinking in Code Generation Models
LMPL
Zongjie Li The Hong Kong University of Science and Technology, Shuai Wang Hong Kong University of Science and Technology
14:10
15m
Talk
Ranking Formal Specifications using LLMs
LMPL
Deyuan (Mike) He Princeton University, Zhendong Ang National University of Singapore, Ankush Desai Amazon Web Services, Aarti Gupta Princeton University
14:25
15m
Talk
Challenges in C++ to Rust Translation with Large Language Models: A Preliminary Empirical Study
LMPL
Yanyan Yan Nanjing University, Yang Feng Nanjing University, Qi He Nanjing University, Jun Zeng Chongqing University, Baowen Xu Nanjing University
14:40
15m
Talk
The Modular Imperative: Rethinking LLMs for Maintainable Software
LMPL
Anastasiya Kravchuk-Kirilyuk Harvard University, Fernanda Graciolli Midspiral, Nada Amin Harvard University
14:55
15m
Talk
Programming Language Techniques for Bridging LLM Code Generation Semantic Gaps
LMPL
Yalong Du Harbin Institute of Technology, Shenzhen, Chaozheng Wang The Chinese University of Hong Kong, Huaijin Wang Ohio State University