FSE 2025
Mon 23 - Fri 27 June 2025 Trondheim, Norway
co-located with ISSTA 2025
Tue 24 Jun 2025 11:30 - 11:50 at Cosmos Hall - Code Generation 2 Chair(s): Reyhaneh Jabbarvand

Accurate method naming is crucial for code readability and maintainability. However, manually creating concise and meaningful names remains a significant challenge. To this end, we propose an approach based on Large Language Models (LLMs) that suggests method names from functional descriptions. At its core is ContextCraft, an automated algorithm for generating context-rich prompts for an LLM, which then suggests the expected method name according to the prompt. For a given query (functional description), ContextCraft retrieves the few best examples whose functional descriptions are most similar to the query. From these examples, it identifies tokens that are likely to appear in the final method name together with their likely positions, picks pivot words that are semantically related to tokens in the corresponding method names, and records the LLM's evaluation results on the selected examples. All of these outputs (tokens with probabilities and position information, pivot words accompanied by associated name tokens and similarity scores, and evaluation results), together with the query and the selected examples, are then filled into a predefined prompt template, resulting in a context-rich prompt. This prompt reduces the randomness of the LLM's output by focusing its attention on relevant context, constraining the solution space, and anchoring results to meaningful semantic relationships. Consequently, the LLM leverages the prompt to generate a more accurate and relevant method name. We evaluated the proposed approach on 43k real-world Java and Python methods accompanied by functional descriptions. Our evaluation results suggest that it significantly outperforms the state-of-the-art approach RNN-att-Copy, improving the chance of an exact match by 52% and reducing the edit distance between generated and expected method names by 32%. They also suggest that the proposed approach works well across various LLMs, including ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, Gemini-1.5, and Llama-3.
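
The abstract outlines the ContextCraft pipeline at a high level. The following is a rough, illustrative Python sketch only, not the authors' implementation: it uses a simple Jaccard token overlap as a stand-in for the paper's description-similarity measure, and the function names (build_context_rich_prompt, similarity) and the tiny example corpus are hypothetical.

# Illustrative sketch (assumptions noted above): retrieve the most similar
# examples by token overlap, collect candidate name tokens with probabilities
# and average positions, and fill a simple prompt template.
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens of a functional description."""
    return re.findall(r"[a-z]+", text.lower())

def similarity(a, b):
    """Jaccard similarity between two token sets (simplified stand-in
    for the paper's description-similarity measure)."""
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_context_rich_prompt(query, corpus, k=3):
    """corpus: list of (functional description, method name) pairs."""
    best = sorted(corpus, key=lambda ex: similarity(query, ex[0]), reverse=True)[:k]

    # Candidate name tokens with probabilities and average positions,
    # estimated from the retrieved examples' method names.
    counts, positions = Counter(), {}
    for _, name in best:
        parts = re.findall(r"[A-Z]?[a-z]+", name)  # split camelCase
        for pos, part in enumerate(parts):
            counts[part.lower()] += 1
            positions.setdefault(part.lower(), []).append(pos)
    total = sum(counts.values())
    token_info = [
        f"{tok} (p={cnt / total:.2f}, avg_pos={sum(positions[tok]) / len(positions[tok]):.1f})"
        for tok, cnt in counts.most_common()
    ]

    examples = "\n".join(f"- '{d}' -> {n}" for d, n in best)
    return (
        f"Suggest a method name for: {query}\n"
        f"Similar examples:\n{examples}\n"
        f"Likely name tokens: {', '.join(token_info)}\n"
        f"Answer with the method name only."
    )

# Hypothetical usage with an invented three-example corpus.
corpus = [
    ("Reads all bytes from a file into a string", "readFileToString"),
    ("Writes a string to the given file", "writeStringToFile"),
    ("Parses a JSON string into a map", "parseJsonToMap"),
]
print(build_context_rich_prompt("Load the contents of a file as text", corpus))

In the actual approach, the resulting context-rich prompt is handed to the LLM, which returns the suggested method name; the sketch stops at prompt construction.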

Tue 24 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:30
Code Generation 2 (Research Papers / Journal First) at Cosmos Hall
Chair(s): Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
10:30
20m
Talk
An Empirical Study of the Non-determinism of ChatGPT in Code Generation
Journal First
Shuyin Ouyang King's College London, Jie M. Zhang King's College London, Mark Harman Meta Platforms, Inc. and UCL, Meng Wang University of Bristol
10:50
20m
Talk
Don’t Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems
Journal First
Zhensu Sun Singapore Management University, Xiaoning Du Monash University, Fu Song Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences; Nanjing Institute of Software Technology, Shangwen Wang National University of Defense Technology, Mingze Ni University of Technology Sydney, Li Li Beihang University, David Lo Singapore Management University
11:10
20m
Talk
Divide-and-Conquer: Generating UI Code from Screenshots
Research Papers
Yuxuan Wan The Chinese University of Hong Kong, Chaozheng Wang The Chinese University of Hong Kong, Yi Dong The Chinese University of Hong Kong, Wenxuan Wang Chinese University of Hong Kong, Shuqing Li The Chinese University of Hong Kong, Yintong Huo Singapore Management University, Michael Lyu Chinese University of Hong Kong
DOI
11:30
20m
Talk
LLM-based Method Name Suggestion with Automatically Generated Context-Rich Prompts
Research Papers
Waseem Akram Beijing Institute of Technology, Yanjie Jiang Peking University, Yuxia Zhang Beijing Institute of Technology, Haris Ali Khan Beijing Institute of Technology, Hui Liu Beijing Institute of Technology
DOI
11:50
20m
Talk
Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Research Papers
Yanlin Wang Sun Yat-sen University, Tianyue Jiang Sun Yat-sen University, Mingwei Liu Sun Yat-Sen University, Jiachi Chen Sun Yat-sen University, Mingzhi Mao Sun Yat-sen University, Xilin Liu Huawei Cloud, Yuchi Ma Huawei Cloud Computing Technologies, Zibin Zheng Sun Yat-sen University
DOI
12:10
20m
Talk
Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues
Journal First
Yue Liu Monash University, Thanh Le-Cong The University of Melbourne, Ratnadira Widyasari Singapore Management University, Kla Tantithamthavorn Monash University, Li Li Beihang University, Xuan-Bach D. Le University of Melbourne, David Lo Singapore Management University

Information for Participants
Tue 24 Jun 2025 10:30 - 12:30 at Cosmos Hall - Code Generation 2 Chair(s): Reyhaneh Jabbarvand
Info for room Cosmos Hall:

This is the main event hall of Clarion Hotel, which will be used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also happen in this room.

The room is just in front of the registration desk, on the other side of the main conference area. The large doors with numbers “1” and “2” provide access to the Cosmos Hall.
