Code Ranking with Structure Awareness Contrastive Learning
Large language models (LLMs) have revolutionized the field of programming for developers by automatically generating code from natural language intent (NL intent). In many cases, LLMs can produce a correct program within several trials. As a result, a major challenge for this task is selecting the most appropriate program from the multiple samples generated by LLMs (also called code ranking). Recent popular approaches to code ranking are ranker-based: a ranker is trained to classify errors in code, using execution results (correct, or specific error types) as supervision signals, and is then used to select the best program. However, existing ranker-based code ranking approaches rely on classification labels, which are highly sensitive to the label distribution and generalize poorly to other distributions. In this paper, we introduce SACL-CR, a novel structure-aware contrastive learning framework for code ranking, to address this challenge. The approach mitigates the generalization issues of existing ranker-based methods by integrating both code sequence and structural information. Encoders trained with this method can effectively identify errors in code, enhancing the model's ability to differentiate between correct and incorrect code. Our experiments demonstrate that SACL-CR significantly improves the pass@k accuracy of several code generation models, including CodeLlama and DeepseekCoder, on the HumanEval and MBPP datasets.
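The abstract reports results in terms of pass@k, the standard metric for sample-and-rank code generation. As a point of reference, the widely used unbiased estimator (introduced with HumanEval) can be sketched as follows; `n`, `c`, and `k` are the total samples, the correct samples, and the selection budget per problem:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k programs drawn (without replacement) from n generated
    samples, c of which are correct, passes the tests."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include at least one correct program.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 samples, 1 correct, pick 1 -> probability 0.5
print(pass_at_k(2, 1, 1))
```

A ranker such as the one described above aims to raise effective pass@k by putting a correct sample first, rather than by increasing the number of correct samples n.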
Mon 28 Apr (displayed time zone: Eastern Time, US & Canada)
14:00 - 15:30 | Code Generation | Research Track at 205 | Chair(s): Coen De Roover (Vrije Universiteit Brussel), Gema Rodríguez-Pérez (University of British Columbia, UBC)

14:00 | 10m Talk | Code Ranking with Structure Awareness Contrastive Learning (Research Track) | Hailin Huang (South China University of Technology), Liuwen Cao (South China University of Technology), Jiexin Wang (South China University of Technology), Tianchen Yu (School of Software Engineering, South China University of Technology), Yi Cai (School of Software Engineering, South China University of Technology, Guangzhou, China)

14:10 | 10m Talk | Algorithmic Inversion: A Learnable Algorithm Representation for Code Generation (Research Track) | Zhongyi Shi (Institute of Software, Chinese Academy of Sciences), Fuzhang Wu (Institute of Software, Chinese Academy of Sciences), Weibin Zeng (Institute of Software, Chinese Academy of Sciences), Yan Kong (Institute of Software, Chinese Academy of Sciences), Sicheng Shen (Institute of Software, Chinese Academy of Sciences), Yanjun Wu (Institute of Software, Chinese Academy of Sciences)

14:20 | 10m Talk | Studying How Configurations Impact Code Generation in LLMs: the Case of ChatGPT (Research Track) | Benedetta Donato (University of Milano - Bicocca), Leonardo Mariani (University of Milano-Bicocca), Daniela Micucci (University of Milano-Bicocca, Italy), Oliviero Riganelli (University of Milano - Bicocca) | Pre-print

14:30 | 10m Talk | Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation (Research Track) | Cristina Improta (University of Naples Federico II), Rosalia Tufano (Università della Svizzera Italiana), Pietro Liguori (University of Naples Federico II), Domenico Cotroneo (University of Naples Federico II), Gabriele Bavota (Software Institute @ Università della Svizzera Italiana)

14:40 | 10m Talk | Advancing Large Language Models in Code Generation: USACO Benchmark and Bug Mitigation Insights (Research Track) | Jacob Trentini (Monte Vista High School), Victor Liu (Seven Lakes High School), Yiming Peng (Vandegrift High School), Ziliang Zong (Texas State University)

14:50 | 10m Talk | Enhancing Code Generation for Low-Resource Languages: No Silver Bullet (Research Track) | Alessandro Giagnorio (Software Institute @ Università della Svizzera italiana), Alberto Martin-Lopez (Software Institute - USI, Lugano), Gabriele Bavota (Software Institute @ Università della Svizzera Italiana) | Pre-print

15:00 | 10m Talk | COFT: Making Large Language Models Better Zero-Shot Learners for Code Generation (Research Track) | Weijia Li (Institute of Software, Chinese Academy of Sciences), Yongjie Qian (Department of Computer Science, North China Electric Power University), Bao Ding, Ke Gao (Institute of Software, Chinese Academy of Sciences), Haixin Chen (Institute of Computing Technology, Chinese Academy of Sciences), Xinyu Wang (Institute of Software, Chinese Academy of Sciences), Yuchen Tong (Institute of Computing Technology, Chinese Academy of Sciences), Ling Li (Institute of Software, Chinese Academy of Sciences), Yanjun Wu (Institute of Software, Chinese Academy of Sciences), Chen Zhao (Institute of Software, Chinese Academy of Sciences)

15:10 | 10m Talk | On the Possibility of Breaking Copyleft Licenses When Reusing Code Generated by ChatGPT (Research Track) | Gaia Colombo (University of Milano - Bicocca), Leonardo Mariani (University of Milano-Bicocca), Daniela Micucci (University of Milano-Bicocca, Italy), Oliviero Riganelli (University of Milano - Bicocca) | Pre-print

15:20 | 10m Live Q&A | Session's Discussion: "Code Generation" (Research Track)