
This program is tentative and subject to change.

Wed 30 Apr 2025 16:30 - 16:45 at Canada Hall 1 and 2 - AI for SE 2

This paper proposes an intention-based code refinement technique that transforms the conventional code refinement process from comment-to-code into intention-to-code. The process is decomposed into two phases: Intention Extraction and Intention-Guided Code Modification Generation. Intention Extraction categorizes review comments using predefined templates, while Intention-Guided Code Modification Generation employs large language models (LLMs) to generate revised code based on the extracted intentions. Three categories with eight subcategories are designed for comment transformation, together with a hybrid approach that combines rule-based and LLM-based classifiers for accurate classification. Extensive experiments with five LLMs (GPT-4o, GPT-3.5, DeepSeek-V2, DeepSeek-7B, CodeQwen-7B) under different prompting settings demonstrate that our approach achieves 79% accuracy in intention extraction and up to 66% in code refinement generation. Our results underscore the potential of this approach in enhancing data quality and improving code refinement processes.
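The abstract describes a two-phase pipeline: a hybrid rule-based/LLM classifier that maps a review comment to a predefined intention template, followed by LLM-driven generation of revised code conditioned on that intention. The sketch below illustrates one way such a pipeline could be wired together in Python; the category names, templates, and the call_llm helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an intention-to-code refinement pipeline.
# Categories, templates, and call_llm() are hypothetical placeholders.
import re

# Phase 1: Intention Extraction -- rule-based matching first,
# with an LLM-based classifier as fallback (a hybrid classifier).
RULES = {
    "rename": re.compile(r"\b(rename|naming|identifier)\b", re.I),
    "extract_method": re.compile(r"\b(extract|split|too long)\b", re.I),
    "fix_bug": re.compile(r"\b(bug|incorrect|wrong|null)\b", re.I),
}

INTENTION_TEMPLATES = {
    "rename": "Rename the identifier mentioned in the comment to a clearer name.",
    "extract_method": "Extract the highlighted logic into a separate method.",
    "fix_bug": "Fix the defect described in the comment without changing the API.",
}

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g. GPT-4o or DeepSeek-V2)."""
    raise NotImplementedError("wire this to your model provider")

def extract_intention(review_comment: str) -> str:
    # Rule-based pass: cheap and precise when keywords match.
    for category, pattern in RULES.items():
        if pattern.search(review_comment):
            return INTENTION_TEMPLATES[category]
    # LLM-based fallback for comments the rules cannot categorize.
    prompt = (
        "Classify this review comment into one of: "
        f"{', '.join(INTENTION_TEMPLATES)}.\nComment: {review_comment}\nCategory:"
    )
    category = call_llm(prompt).strip().lower()
    # Fall back to the raw comment if the model returns an unknown label.
    return INTENTION_TEMPLATES.get(category, review_comment)

# Phase 2: Intention-Guided Code Modification Generation.
def refine_code(old_code: str, review_comment: str) -> str:
    intention = extract_intention(review_comment)
    prompt = (
        "You are refining code during review.\n"
        f"Intention: {intention}\n"
        f"Original code:\n{old_code}\n"
        "Return only the revised code."
    )
    return call_llm(prompt)
```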


Wed 30 Apr

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30: AI for SE 2 (Canada Hall 1 and 2)
16:00 (15m) Talk
Large Language Models for Safe Minimization
Research Track
Aashish Yadavally University of Texas at Dallas, Xiaokai Rong The University of Texas at Dallas, Phat Nguyen The University of Texas at Dallas, Tien N. Nguyen University of Texas at Dallas
16:15 (15m) Talk
LUNA: A Model-Based Universal Analysis Framework for Large Language Models
Journal-first Papers
Da Song University of Alberta, Xuan Xie University of Alberta, Jiayang Song University of Alberta, Derui Zhu Technical University of Munich, Yuheng Huang University of Alberta, Canada, Felix Juefei-Xu New York University, Lei Ma The University of Tokyo & University of Alberta
16:30 (15m) Talk
Intention is All You Need: Refining Your Code from Your Intention
Research Track
Qi Guo Tianjin University, Xiaofei Xie Singapore Management University, Shangqing Liu Nanyang Technological University, Ming Hu Nanyang Technological University, Xiaohong Li Tianjin University, Lei Bu Nanjing University
16:45 (15m) Talk
RLCoder: Reinforcement Learning for Repository-Level Code Completion
Research Track
Yanlin Wang Sun Yat-sen University, Yanli Wang Sun Yat-sen University, Daya Guo, Jiachi Chen Sun Yat-sen University, Ruikai Zhang Huawei Cloud Computing Technologies, Yuchi Ma Huawei Cloud Computing Technologies, Zibin Zheng Sun Yat-sen University
17:00 (15m) Talk
InterTrans: Leveraging Transitive Intermediate Translations to Enhance LLM-based Code Translation
Research Track
Marcos Macedo Queen's University, Yuan Tian Queen's University, Kingston, Ontario, Pengyu Nie University of Waterloo, Filipe Cogo Centre for Software Excellence, Huawei Canada, Bram Adams Queen's University
17:15 (15m) Talk
Toward a Theory of Causation for Interpreting Neural Code Models
Journal-first Papers
David Nader Palacio William & Mary, Alejandro Velasco William & Mary, Nathan Cooper William & Mary, Alvaro Rodriguez Universidad Nacional de Colombia, Kevin Moran University of Central Florida, Denys Poshyvanyk William & Mary