FSE 2025
Mon 23 - Fri 27 June 2025 Trondheim, Norway
Tue 24 Jun 2025 10:50 - 11:10 at Cosmos Hall - Code Generation 2 Chair(s): Reyhaneh Jabbarvand

Large pre-trained language models are now widely used in neural code completion systems. Although large code models significantly outperform their smaller counterparts, around 70% of the code completions displayed by GitHub Copilot are not accepted by developers. Completions that are reviewed but not accepted contribute little to developer productivity and may even increase developers' workload, since state-of-the-art code completion systems generate completions automatically as developers type once the service is enabled. Worse, given the high cost of large code models, such unhelpful completions waste substantial computing resources and energy, running counter to the principle of sustainable AI. Yet this waste has not been recognized, let alone effectively addressed, by the research community for neural code completion, so preventing unhelpful code completions in a cost-effective way is urgently needed. To fill this gap, we first investigate the prompts that lead to unhelpful code completions, which we call “low-return prompts.” We empirically identify four observable patterns in low-return prompts, each of which lacks information necessary for a helpful completion and therefore cannot be remedied by improving model accuracy alone. This demonstrates the feasibility of identifying low-return prompts from the prompts themselves. Motivated by this finding, we propose an early-rejection mechanism that turns down low-return prompts by predicting completion quality in advance: prompts estimated to yield unhelpful completions are never sent to the model. We further investigate five types of estimators to demonstrate the feasibility of the mechanism. Experimental results show that the estimator can reject 20% of code completion requests with 97.4% precision. To the best of our knowledge, this is the first systematic approach to the problem of unhelpful code completions, and this work sheds light on an important research direction for large code models.
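To make the proposed early-rejection mechanism concrete, the following is a minimal sketch of how such a gate could sit in front of a completion service. It is an illustration, not the authors' implementation: estimate_acceptance, REJECT_THRESHOLD, and maybe_complete are hypothetical names, and the toy length-based estimator merely stands in for the five trained estimator types evaluated in the paper.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CompletionRequest:
    prompt: str  # the code context typed so far

def estimate_acceptance(prompt: str) -> float:
    # Toy stand-in: the paper's estimators are trained models that predict
    # completion quality from the prompt alone. Here, very short contexts
    # are treated as lacking the information a helpful completion needs.
    return min(len(prompt.strip()) / 200.0, 1.0)

REJECT_THRESHOLD = 0.5  # hypothetical; tuned so a target share of requests is rejected

def maybe_complete(request: CompletionRequest,
                   invoke_large_model: Callable[[str], str]) -> Optional[str]:
    # Early rejection: prompts predicted to yield unhelpful completions
    # never reach the large code model, saving compute and energy.
    if estimate_acceptance(request.prompt) < REJECT_THRESHOLD:
        return None  # show nothing rather than an unhelpful suggestion
    return invoke_large_model(request.prompt)

For the mechanism to pay off, the estimator must be far cheaper to run than the large code model itself; otherwise rejecting a request would cost nearly as much as serving it.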

Tue 24 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:30
Code Generation 2 (Research Papers / Journal First) at Cosmos Hall
Chair(s): Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
10:30
20m
Talk
An Empirical Study of the Non-determinism of ChatGPT in Code Generation
Journal First
Shuyin Ouyang King's College London, Jie M. Zhang King's College London, Mark Harman Meta Platforms, Inc. and UCL, Meng Wang University of Bristol
10:50
20m
Talk
Don’t Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems
Journal First
Zhensu Sun Singapore Management University, Xiaoning Du Monash University, Fu Song Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences; Nanjing Institute of Software Technology, Shangwen Wang National University of Defense Technology, Mingze Ni University of Technology Sydney, Li Li Beihang University, David Lo Singapore Management University
11:10
20m
Talk
Divide-and-Conquer: Generating UI Code from Screenshots
Research Papers
Yuxuan Wan The Chinese University of Hong Kong, Chaozheng Wang The Chinese University of Hong Kong, Yi Dong The Chinese University of Hong Kong, Wenxuan Wang Chinese University of Hong Kong, Shuqing Li The Chinese University of Hong Kong, Yintong Huo Singapore Management University, Michael Lyu Chinese University of Hong Kong
11:30
20m
Talk
LLM-based Method Name Suggestion with Automatically Generated Context-Rich Prompts
Research Papers
Waseem Akram Beijing Institute of Technology, Yanjie Jiang Peking University, Yuxia Zhang Beijing Institute of Technology, Haris Ali Khan Beijing Institute of Technology, Hui Liu Beijing Institute of Technology
11:50
20m
Talk
Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Research Papers
Yanlin Wang Sun Yat-sen University, Tianyue Jiang Sun Yat-sen University, Mingwei Liu Sun Yat-Sen University, Jiachi Chen Sun Yat-sen University, Mingzhi Mao Sun Yat-sen University, Xilin Liu Huawei Cloud, Yuchi Ma Huawei Cloud Computing Technologies, Zibin Zheng Sun Yat-sen University
12:10
20m
Talk
Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues
Journal First
Yue Liu Monash University, Le-Cong Thanh The University of Melbourne, Ratnadira Widyasari Singapore Management University, Kla Tantithamthavorn Monash University, Li Li Beihang University, Xuan-Bach D. Le University of Melbourne, David Lo Singapore Management University

Information for Participants
Tue 24 Jun 2025 10:30 - 12:30 at Cosmos Hall - Code Generation 2 Chair(s): Reyhaneh Jabbarvand
Info for room Cosmos Hall:

This is the main event hall of Clarion Hotel, which will be used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also happen in this room.

The room is just in front of the registration desk, on the other side of the main conference area. The large doors with numbers “1” and “2” provide access to the Cosmos Hall.