FORGE 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Sun 27 Apr 2025 14:00 - 14:12 at 207 - Session 1: FM for Code Generation

Code Large Language Models (CodeLLMs) have demonstrated impressive proficiency in code completion tasks. However, they often fall short of fully understanding the extensive context of a project repository, such as the intricacies of relevant files and class hierarchies, which can result in less precise completions. To overcome these limitations, we present RepoHyper, a multifaceted framework designed to address the complex challenges associated with repository-level code completion. Central to RepoHyper is the Repo-level Semantic Graph (RSG), a novel semantic graph structure that encapsulates the vast context of code repositories. Furthermore, RepoHyper leverages an Expand-and-Refine retrieval method, combining a graph expansion step with a link prediction algorithm applied to the RSG, enabling the effective retrieval and prioritization of relevant code snippets. Our evaluations show that RepoHyper markedly outperforms existing techniques in repository-level code completion, showcasing enhanced accuracy across various datasets when compared to several strong baselines.
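To make the expand-and-refine idea concrete, the sketch below shows one hypothetical way such a retrieval could work over a small code-entity graph: expand a seed set of entities by following graph edges, then refine the candidates by ranking them against the query context. All names here (Node, similarity, expand, refine) and the toy embeddings are illustrative assumptions, not the paper's actual RSG implementation or API.

```python
# Hypothetical sketch of expand-and-refine retrieval over a code-entity graph.
# Not the RepoHyper implementation; all names and data are illustrative.
from dataclasses import dataclass, field


@dataclass
class Node:
    """A code entity (file, class, or function) with a toy embedding."""
    name: str
    embedding: tuple[float, ...]
    neighbors: list[str] = field(default_factory=list)


def similarity(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Dot-product similarity, standing in for a learned link predictor."""
    return sum(x * y for x, y in zip(a, b))


def expand(graph: dict[str, Node], seeds: list[str], hops: int = 1) -> set[str]:
    """Grow the seed set by following graph edges for a fixed number of hops."""
    frontier, visited = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = {n for name in frontier for n in graph[name].neighbors} - visited
        visited |= frontier
    return visited


def refine(graph: dict[str, Node], candidates: set[str],
           query: tuple[float, ...], k: int = 3) -> list[str]:
    """Rank expanded candidates against the query context and keep the top k."""
    ranked = sorted(candidates,
                    key=lambda n: similarity(graph[n].embedding, query),
                    reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    graph = {
        "utils.py": Node("utils.py", (0.9, 0.1), ["parser.py"]),
        "parser.py": Node("parser.py", (0.7, 0.5), ["ast_walker.py"]),
        "ast_walker.py": Node("ast_walker.py", (0.2, 0.9), []),
    }
    query_ctx = (0.8, 0.3)  # embedding of the incomplete code to be completed
    candidates = expand(graph, ["utils.py"], hops=2)
    print(refine(graph, candidates, query_ctx, k=2))
```

In this toy setup the expansion pulls in entities reachable from the seed file, and the refinement step keeps only those most similar to the completion context, mirroring the retrieve-then-prioritize flow described in the abstract.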


Sun 27 Apr

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Session 1: FM for Code Generation Research Papers / Data and Benchmarking at 207
14:00
12m
Long-paper
RepoHyper: Search-Expand-Refine on Semantic Graphs for Repository-Level Code Completion
Research Papers
Huy Nhat Phan FPT Software AI Center, Hoang Nhat Phan Nanyang Technological University, Tien N. Nguyen University of Texas at Dallas, Nghi D. Q. Bui Salesforce Research
14:12
12m
Long-paper
SoTaNa: An Open-Source Software Engineering Instruction-Tuned Model
Research Papers
Ensheng Shi Xi’an Jiaotong University, Yanlin Wang Sun Yat-sen University, Fengji Zhang Microsoft Research Asia, Bei Chen Microsoft Research Asia, Hongyu Zhang Chongqing University, Yanli Wang Sun Yat-sen University, Daya Guo Sun Yat-sen University, Lun Du Microsoft Research, Shi Han Microsoft Research, Dongmei Zhang Microsoft Research, Hongbin Sun Xi’an Jiaotong University
14:24
12m
Long-paper
Automated Codebase Reconciliation using Large Language Models
Research Papers
Aneri Gandhi University of Toronto, Sanjukta De Advanced Micro Devices, Marsha Chechik University of Toronto, Vinay Pandit Advanced Micro Devices, Max Kiehn Advanced Micro Devices, Matthieu Chan Chee Advanced Micro Devices, Yonas Bedasso Advanced Micro Devices
14:36
12m
Long-paper
AI-Powered, But Power-Hungry? Energy Efficiency of LLM-Generated Code
Research Papers
Lola Solovyeva University of Twente, Sophie Weidmann University of Twente, Fernando Castor University of Twente
14:48
6m
Short-paper
SwiftEval: Developing a Language-Specific Benchmark for LLM-generated Code Evaluation
Data and Benchmarking
14:54
6m
Short-paper
SE Arena: An Interactive Platform for Evaluating Foundation Models in Software Engineering
Research Papers
Zhimin Zhao Queen's University
15:00
12m
Long-paper
PerfCodeGen: Improving Performance of LLM Generated Code with Execution Feedback
Research Papers
Yun Peng The Chinese University of Hong Kong, Akhilesh Deepak Gotmare Salesforce Research, Michael Lyu The Chinese University of Hong Kong, Caiming Xiong Salesforce Research, Silvio Savarese Salesforce Research, Doyen Sahoo Salesforce Research
15:12
6m
Short-paper
HyRACC: A Hybrid Retrieval-Augmented Framework for More Efficient Code Completion
Research Papers
Chuanyi Li Nanjing University, Jiwei Shang Nanjing University, Yi Feng Nanjing University, Bin Luo Nanjing University
15:18
6m
Short-paper
OptCodeTrans: Boost LLMs on Low-Resource Programming Language Translation
Research Papers
Jianbo Lin Nanjing University, Yi Shen Nanjing University, Chuanyi Li Nanjing University, Changan Niu Software Institute, Nanjing University, Bin Luo Nanjing University