
This program is tentative and subject to change.

Thu 1 May 2025 11:30 - 11:45 at 215 - SE for AI 2 Chair(s): Grace Lewis

Code generation aims to automatically produce source code that conforms to a given programming specification, and it has received extensive attention, especially with the development of large language models (LLMs). Due to the inherent difficulty of code generation, the code generated by LLMs may not align with the specification. Although thought-eliciting prompting techniques have been proposed to enhance the code generation performance of LLMs, producing a correct understanding of complicated programming problems remains challenging, resulting in unsatisfactory performance. Feedback-based prompting techniques have also been proposed to fix incorrect code using error messages produced by test execution. However, when the generated code deviates significantly from the ground truth, such coarse-grained information offers little help in improving performance.

In this work, we propose a novel prompting technique, called μFiX, to improve the code generation performance of LLMs by devising both sophisticated thought-eliciting prompting and feedback-based prompting, and by making the first exploration of their synergy. In the thought-eliciting phase, μFiX exploits test case analysis to obtain a specification understanding and runs a self-improvement process to identify and refine misunderstandings. In the feedback-based phase, μFiX further fixes the specification understanding in the direction that reduces the gap between the provided understanding (from the first phase) and the actual understanding implicitly used by the LLM during code generation. By improving the understanding in this way, μFiX substantially improves the code generation performance of LLMs. Our evaluation of two advanced LLMs (ChatGPT and DeepSeek-Coder) on six widely used benchmarks, compared against 15 baselines, demonstrates the effectiveness of μFiX. For example, μFiX outperforms the most effective baseline with an average improvement of 35.62% in terms of Pass@1 across all subjects.
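The two-phase loop described in the abstract can be sketched in a few lines of Python. This is a toy illustration reconstructed only from the abstract, not the authors' implementation: every name here (`mufix_loop`, `run_tests`, `stub_llm`, the `"understand"`/`"generate"`/`"refine"` task labels) is a hypothetical placeholder, and the LLM is replaced by a stub.

```python
def run_tests(code, test_cases):
    """Execute candidate code against test cases; return the failing cases."""
    namespace = {}
    exec(code, namespace)  # candidate code is expected to define solve()
    func = namespace["solve"]
    failures = []
    for args, expected in test_cases:
        try:
            if func(*args) != expected:
                failures.append((args, expected))
        except Exception:
            failures.append((args, expected))
    return failures

def mufix_loop(spec, test_cases, llm, max_rounds=3):
    """Hedged sketch of the two-phase idea: (1) elicit a specification
    understanding via test case analysis, then (2) on test failure, repair
    the *understanding* (not the code directly) using execution feedback."""
    understanding = llm("understand", spec, test_cases)      # phase 1
    code = ""
    for _ in range(max_rounds):
        code = llm("generate", spec, understanding)
        failures = run_tests(code, test_cases)
        if not failures:
            return code
        understanding = llm("refine", understanding, failures)  # phase 2
    return code

def stub_llm(task, *args):
    """Toy stand-in for an LLM: first understanding is off by one and gets
    corrected after a single feedback-driven refinement."""
    if task == "understand":
        return "add one"
    if task == "refine":
        return "add two"
    understanding = args[1]  # task == "generate"
    if understanding == "add one":
        return "def solve(x):\n    return x + 1"
    return "def solve(x):\n    return x + 2"

tests = [((1,), 3), ((5,), 7)]
code = mufix_loop("return x plus two", tests, stub_llm)
```

The point of the sketch is the control flow: the feedback from failing tests flows into the understanding, and the code is always regenerated from the refined understanding, rather than patched in place.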


Thu 1 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
SE for AI 2
New Ideas and Emerging Results (NIER) / Research Track at 215
Chair(s): Grace Lewis Carnegie Mellon Software Engineering Institute
11:00
15m
Talk
Answering User Questions about Machine Learning Models through Standardized Model Cards
SE for AI
Research Track
Tajkia Rahman Toma University of Alberta, Balreet Grewal University of Alberta, Cor-Paul Bezemer University of Alberta
11:15
15m
Talk
Fairness Testing through Extreme Value Theory
SE for AI
Research Track
Verya Monjezi University of Texas at El Paso, Ashutosh Trivedi University of Colorado Boulder, Vladik Kreinovich University of Texas at El Paso, Saeid Tizpaz-Niari University of Illinois Chicago
11:30
15m
Talk
Fixing Large Language Models' Specification Misunderstanding for Better Code Generation
SE for AI
Research Track
Zhao Tian Tianjin University, Junjie Chen Tianjin University, Xiangyu Zhang Purdue University
Pre-print
11:45
15m
Talk
SOEN-101: Code Generation by Emulating Software Process Models Using Large Language Model Agents
SE for AI
Research Track
Feng Lin Concordia University, Dong Jae Kim DePaul University, Tse-Hsun (Peter) Chen Concordia University
12:00
15m
Talk
The Product Beyond the Model -- An Empirical Study of Repositories of Open-Source ML Products
SE for AI
Research Track
Nadia Nahar Carnegie Mellon University, Haoran Zhang Carnegie Mellon University, Grace Lewis Carnegie Mellon Software Engineering Institute, Shurui Zhou University of Toronto, Christian Kästner Carnegie Mellon University
12:15
15m
Talk
Towards Trustworthy LLMs for Code: A Data-Centric Synergistic Auditing Framework
SE for AI
New Ideas and Emerging Results (NIER)
Chong Wang Nanyang Technological University, Zhenpeng Chen Nanyang Technological University, Li Tianlin NTU, Yilun Zhang AIXpert, Yang Liu Nanyang Technological University