ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Wed 30 Oct 2024 10:45 - 11:00 at Gardenia - Code generation 2 Chair(s): Yangruibo Ding

Large Language Models (LLMs) such as GPT-4, StarCoder, and Code Llama are transforming the way developers approach programming by automatically generating code from a given context, such as a natural language description or incomplete surrounding code. Despite these advances, generating syntactically and semantically correct code remains challenging, especially for complex programming tasks. Developers therefore typically sample multiple candidate solutions from an LLM to increase the likelihood of obtaining correct code, but selecting the correct solution from these candidates, a process known as code ranking, remains a major challenge. Current research on code ranking falls into execution-based and non-execution-based methods. Execution-based methods, although effective, face notable limitations, such as the scarcity of high-quality unit tests and security risks. Non-execution-based methods like CodeRanker, which rely solely on classification labels to train a code ranker, struggle to capture subtle errors and cannot provide detailed error insights. Recognizing the strengths and limitations of both approaches, we propose a new method that combines their advantages. The key insight of our work is that an effective code ranker must genuinely comprehend the underlying causes of erroneous code; relying solely on classification labels is insufficient. Building on this insight, this paper presents RankEF, an approach to code ranking that leverages execution feedback. RankEF employs multi-task learning to integrate code classification with execution feedback generation, enabling the model to understand why code is incorrect and to distinguish correct from incorrect solutions without executing any code during the ranking phase. Experiments on three code generation benchmarks (APPS, MBPP, and HumanEval) demonstrate that RankEF significantly outperforms the state-of-the-art CodeRanker, achieving relative improvements of +30.97%, +31.43%, and +19.51% in Pass@1, Pass@2, and Pass@5 on APPS, respectively.
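The abstract's multi-task idea can be sketched roughly as follows: a single sequence-to-sequence ranker is fine-tuned both to predict a correctness label for a candidate solution and to generate the execution feedback that running it produced during training-data collection; at ranking time only the label prediction is used, so no code is executed. This is a minimal illustrative sketch, not the authors' implementation; the model name, prompt format, and loss weighting below are assumptions.

```python
# Minimal sketch (assumptions, not the RankEF code) of multi-task training
# that couples correctness classification with execution-feedback generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")  # assumed backbone
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

def multitask_loss(problem, candidate, label_text, feedback_text, alpha=0.5):
    """Combine a correctness-label target ("correct"/"incorrect") with an
    execution-feedback generation target for one candidate solution."""
    source = tokenizer(problem + "\n" + candidate,
                       return_tensors="pt", truncation=True, max_length=512)

    # Task 1: predict the correctness label as a short target sequence.
    cls_ids = tokenizer(label_text, return_tensors="pt").input_ids
    cls_loss = model(**source, labels=cls_ids).loss

    # Task 2: generate the execution feedback (e.g. error type and message)
    # collected by running the candidate once while building the training set;
    # this is what pushes the ranker to model *why* a candidate fails.
    fb_ids = tokenizer(feedback_text, return_tensors="pt",
                       truncation=True, max_length=128).input_ids
    fb_loss = model(**source, labels=fb_ids).loss

    return alpha * cls_loss + (1 - alpha) * fb_loss
```

At inference, each candidate could then be scored by the model's probability of emitting the "correct" label and the candidates ranked by that score, with no execution needed.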

Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
Code generation 2 (Research Papers / Tool Demonstrations) at Gardenia
Chair(s): Yangruibo Ding Columbia University
10:30
15m
Talk
Preference-Guided Refactored Tuning for Retrieval Augmented Code Generation
Research Papers
Xinyu Gao, Yun Xiong Fudan University, Deze Wang National University of Defense Technology, Zhenhan Guan Fudan University, Zejian Shi Fudan University, Haofen Wang Tongji University, Shanshan Li National University of Defense Technology
Pre-print
10:45
15m
Talk
Sifting through the Chaff: On Utilizing Execution Feedback for Ranking the Generated Code Candidates
Research Papers
Zhihong Sun Shandong Normal University, Yao Wan Huazhong University of Science and Technology, Jia Li, Hongyu Zhang Chongqing University, Zhi Jin Peking University, Ge Li Peking University, Chen Lyu Shandong Normal University
11:00
15m
Talk
Promise and Peril of Collaborative Code Generation Models: Balancing Effectiveness and Memorization
Research Papers
Zhi Chen Singapore Management University, Lingxiao Jiang Singapore Management University
Pre-print
11:15
15m
Talk
JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models
Research Papers
Jialun Cao Hong Kong University of Science and Technology, Zhiyong Chen Nanjing University, Jiarong Wu The Hong Kong University of Science and Technology, Shing-Chi Cheung Hong Kong University of Science and Technology, Chang Xu Nanjing University
11:30
15m
Talk
PACGBI: A Pipeline for Automated Code Generation from Backlog Items
Tool Demonstrations
Mahja Sarschar Hochschule für Technik und Wirtschaft Berlin, Gefei Zhang HTW Berlin, Annika Nowak Capgemini
11:45
15m
Talk
Contextualized Data-Wrangling Code Generation in Computational Notebooks
Research Papers
Junjie Huang The Chinese University of Hong Kong, Daya Guo Sun Yat-sen University, Chenglong Wang Microsoft Research, Jiazhen Gu The Chinese University of Hong Kong, Shuai Lu Microsoft Research, Jeevana Priya Inala Microsoft Research, Cong Yan Microsoft Research, Jianfeng Gao Microsoft Research, Nan Duan Microsoft Research, Michael Lyu The Chinese University of Hong Kong