ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Mon 17 Nov 2025 15:00 - 15:10 at Vista - Code Generation 1

Code generation is a latency-sensitive task that demands high timeliness. However, with the growing interest in, and inherent difficulty of, repository-level code generation, most existing studies focus on improving the correctness of generated code while overlooking inference efficiency, which is substantially affected by the overhead of LLM generation. Although there has been work on accelerating LLM inference, these approaches are not tailored to the characteristics of code generation: they treat code the same as natural-language sequences and ignore its unique syntactic and semantic properties, which are also crucial for improving efficiency. Consequently, these approaches exhibit limited effectiveness on code generation tasks, particularly in repository-level scenarios of considerable complexity and difficulty. To alleviate this issue, following the draft-verification paradigm, we propose FastCoder, a simple yet highly efficient inference acceleration approach designed specifically for code generation, without compromising the quality of the output. FastCoder constructs a multi-source datastore that provides access to both general and project-specific knowledge, facilitating the retrieval of high-quality draft sequences. Moreover, FastCoder reduces retrieval cost by controlling retrieval timing, and further improves efficiency through parallel retrieval and a context- and LLM-preference-aware cache. Experimental results show that FastCoder achieves up to 2.53× and 2.54× speedup over autoregressive decoding on repository-level and standalone code generation tasks, respectively, outperforming state-of-the-art inference acceleration approaches by up to 88%. FastCoder can also be integrated with existing correctness-focused code generation approaches to accelerate the LLM generation process, reaching a speedup exceeding 2.6×.
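The retrieval-based draft-verification idea behind FastCoder can be illustrated with a minimal sketch. All names below (`build_datastore`, `verify`, `generate`) are hypothetical and greatly simplified: the paper's datastore, retrieval-timing control, and cache are more sophisticated, and real verification happens in a single parallel forward pass rather than token by token.

```python
# Minimal sketch of retrieval-based draft-verification decoding
# (illustrative only; not the authors' implementation).

def build_datastore(corpora):
    """Index short context keys -> candidate continuations, drawn from
    both general and project-specific code (the 'multi-source datastore')."""
    store = {}
    for tokens in corpora:
        for i in range(len(tokens) - 1):
            key = tuple(tokens[max(0, i - 1):i + 1])        # short context key
            store.setdefault(key, tokens[i + 1:i + 1 + 4])  # draft of up to 4 tokens
    return store

def verify(model_next_token, context, draft):
    """Accept draft tokens left-to-right while they match what the model
    would have generated autoregressively (sequential here for clarity;
    in practice this is one parallel forward pass)."""
    accepted = []
    for tok in draft:
        if tok != model_next_token(context + accepted):
            break
        accepted.append(tok)
    return accepted

def generate(model_next_token, store, prompt, max_len=16):
    out = list(prompt)
    while len(out) - len(prompt) < max_len:
        draft = store.get(tuple(out[-2:]))
        if draft:  # retrieval hit: try to accept a multi-token draft at once
            accepted = verify(model_next_token, out, draft)
            if accepted:
                out.extend(accepted)
                continue
        out.append(model_next_token(out))  # miss: fall back to one-token decoding
    return out
```

With a toy deterministic "model" that always emits the previous token plus one, drafts retrieved from a matching corpus are accepted several tokens at a time, which is the source of the speedup when drafts agree with the model's own preferences.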


Mon 17 Nov

Displayed time zone: Seoul

14:00 - 15:30
14:00
10m
Talk
QuanBench: Benchmarking Quantum Code Generation with Large Language Models
Research Papers
Xiaoyu Guo Kyushu University, Minggu Wang Kyushu University, Jianjun Zhao Kyushu University
14:10
10m
Talk
Token Sugar: Making Source Code Sweeter for LLMs through Token-Efficient Shorthand
Research Papers
Zhensu Sun Singapore Management University, Chengran Yang Singapore Management University, Singapore, Xiaoning Du Monash University, Zhou Yang University of Alberta, Alberta Machine Intelligence Institute, Li Li Beihang University, David Lo Singapore Management University
14:20
10m
Talk
FGIT: Fault-Guided Fine-Tuning for Code Generation
Research Papers
Lishui Fan Zhejiang University, Zhongxin Liu Zhejiang University, Haoye Wang Hangzhou City University, Lingfeng Bao Zhejiang University, Xin Xia Zhejiang University, Shanping Li Zhejiang University
14:30
10m
Talk
Mixture-of-Experts Low-Rank Adaptation for Multilingual Code Summarization
Research Papers
Tianchen Yu School of Software Engineering, South China University of Technology, Li Yuan School of Software Engineering, South China University of Technology, Guangzhou, China, Hailin Huang South China University of Technology, Jiexin Wang South China University of Technology, Yi Cai School of Software Engineering, South China University of Technology, Guangzhou, China
14:40
10m
Talk
EfficientEdit: Accelerating Code Editing via Edit-Oriented Speculative Decoding
Research Papers
Peiding Wang Beihang University, Li Zhang Beihang University, Fang Liu Beihang University, Yinghao Zhu Beihang University, Wang Xu Tsinghua University, Lin Shi Beihang University, Xiaoli Lian Beihang University, China, Minxiao Li Beihang University, Bo Shen Huawei Cloud Computing Technologies Co., Ltd., Binzhang Fu Huawei Technologies, n.n.
Pre-print
14:50
10m
Talk
Bias Testing and Mitigation in LLM-based Code Generation
Journal-First Track
Dong Huang The University of Hong Kong, Jie M. Zhang King's College London, Qingwen Bu Shanghai Jiao Tong University, Xiaofei Xie Singapore Management University, Junjie Chen Tianjin University, Heming Cui University of Hong Kong
15:00
10m
Talk
FastCoder: Accelerating Repository-level Code Generation via Efficient Retrieval and Verification
Research Papers
Qianhui Zhao Beihang University, Li Zhang Beihang University, Fang Liu Beihang University, Xiaoli Lian Beihang University, China, Meng Qiaoyuanhe Beihang University, Ziqian Jiao Beihang University, Zetong Zhou Beihang University, Jia Li, Lin Shi Beihang University
Pre-print
15:10
10m
Talk
AlignCoder: Aligning Retrieval with Target Intent for Repository-Level Code Completion
Research Papers
Tianyue Jiang Sun Yat-sen University, Yanli Wang Sun Yat-sen University, Yanlin Wang Sun Yat-sen University, Daya Guo, Ensheng Shi Huawei, Yuchi Ma Huawei Cloud Computing Technologies, Jiachi Chen Sun Yat-sen University, Zibin Zheng Sun Yat-sen University
15:20
10m
Talk
Effectiveness of symmetric metamorphic relations on validating the stability of code generation LLM
Journal-First Track
Chan Pak Yuen Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, China, Jacky Keung City University of Hong Kong, Zhen Yang Shandong University