OptCodeTrans: Boost LLMs on Low-Resource Programming Language Translation
Program translation aims to translate source code from one programming language (PL) to another. Current research on code translation focuses predominantly on high-resource PLs such as Python and Java, leaving low-resource languages insufficiently explored. Fortunately, the rapid advancement of Large Language Models (LLMs) has created new opportunities for research on low-resource PLs. To bridge this gap in the era of foundation models, we introduce OptCodeTrans, a two-phase post-training approach that combines continued pre-training with instruction fine-tuning. We also construct a high-quality dataset covering three low-resource languages that represent different programming paradigms: Cangjie, Julia, and OCaml. Extensive experiments demonstrate the effectiveness of OptCodeTrans, which achieves average improvements of 10.28 BLEU and 5.15 points in functional equivalence across all translation tasks and backbone models. Our work offers valuable insights into effective post-training strategies for adapting LLMs to low-resource code translation.
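As a rough illustration of the two-phase recipe the abstract describes (continued pre-training followed by instruction fine-tuning), here is a minimal sketch using the HuggingFace Transformers Trainer. The backbone model name, file paths, prompt template, and hyperparameters are illustrative assumptions, not details taken from the paper.

    # Sketch of two-phase post-training: (1) continued causal-LM pre-training
    # on raw low-resource PL code, (2) instruction fine-tuning on translation
    # pairs. Model name, paths, prompt, and hyperparameters are assumptions.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE_MODEL = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed backbone

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

    def train(dataset, output_dir, epochs):
        """Run one training phase with a plain causal-LM objective."""
        Trainer(
            model=model,
            args=TrainingArguments(output_dir=output_dir,
                                   num_train_epochs=epochs,
                                   per_device_train_batch_size=4),
            train_dataset=dataset,
            data_collator=collator,
        ).train()

    # Phase 1: continued pre-training on a raw corpus of low-resource PL
    # code (e.g. Cangjie/Julia/OCaml sources flattened into a text file).
    raw = load_dataset("text", data_files="lowres_corpus.txt")["train"]
    raw = raw.map(lambda b: tokenizer(b["text"], truncation=True,
                                      max_length=1024),
                  batched=True, remove_columns=["text"])
    train(raw, "phase1_cpt", epochs=1)

    # Phase 2: instruction fine-tuning on translation pairs. Each JSONL
    # record is assumed to hold src_lang/tgt_lang/src_code/tgt_code fields.
    PROMPT = "Translate the following {src} code to {tgt}:\n{code}\n### Answer:\n"

    def to_example(batch):
        texts = [PROMPT.format(src=s, tgt=t, code=c) + a + tokenizer.eos_token
                 for s, t, c, a in zip(batch["src_lang"], batch["tgt_lang"],
                                       batch["src_code"], batch["tgt_code"])]
        return tokenizer(texts, truncation=True, max_length=1024)

    pairs = load_dataset("json", data_files="translation_pairs.jsonl")["train"]
    pairs = pairs.map(to_example, batched=True,
                      remove_columns=pairs.column_names)
    train(pairs, "phase2_ift", epochs=2)

For brevity the sketch keeps the plain causal-LM loss in both phases; a more careful instruction-tuning setup would mask the prompt tokens out of the phase-two loss.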
Sun 27 Apr · Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
14:00 · 12m · Long-paper | RepoHyper: Search-Expand-Refine on Semantic Graphs for Repository-Level Code Completion (Research Papers) | Huy Nhat Phan (FPT Software AI Center), Hoang Nhat Phan (Nanyang Technological University), Tien N. Nguyen (University of Texas at Dallas), Nghi D. Q. Bui (Salesforce Research)
14:12 · 12m · Long-paper | SoTaNa: An Open-Source Software Engineering Instruction-Tuned Model (Research Papers) | Ensheng Shi (Xi’an Jiaotong University), Yanlin Wang (Sun Yat-sen University), Fengji Zhang (Microsoft Research Asia), Bei Chen (Microsoft Research Asia), Hongyu Zhang (Chongqing University), Yanli Wang (Sun Yat-sen University), Daya Guo (Sun Yat-sen University), Lun Du (Microsoft Research), Shi Han (Microsoft Research), Dongmei Zhang (Microsoft Research), Hongbin Sun (Xi’an Jiaotong University)
14:24 · 12m · Long-paper | Automated Codebase Reconciliation using Large Language Models (Research Papers) | Aneri Gandhi (University of Toronto), Sanjukta De (Advanced Micro Devices), Marsha Chechik (University of Toronto), Vinay Pandit (Advanced Micro Devices), Max Kiehn (Advanced Micro Devices), Matthieu Chan Chee (Advanced Micro Devices), Yonas Bedasso (Advanced Micro Devices)
14:36 · 12m · Long-paper | AI-Powered, But Power-Hungry? Energy Efficiency of LLM-Generated Code (Research Papers) | Lola Solovyeva (University of Twente), Sophie Weidmann (University of Twente), Fernando Castor (University of Twente)
14:48 · 6m · Short-paper | SwiftEval: Developing a Language-Specific Benchmark for LLM-generated Code Evaluation (Data and Benchmarking)
14:54 · 6m · Short-paper | SE Arena: An Interactive Platform for Evaluating Foundation Models in Software Engineering (Research Papers) | Zhimin Zhao (Queen's University)
15:00 · 12m · Long-paper | PerfCodeGen: Improving Performance of LLM Generated Code with Execution Feedback (Research Papers) | Yun Peng (The Chinese University of Hong Kong), Akhilesh Deepak Gotmare (Salesforce Research), Michael Lyu (The Chinese University of Hong Kong), Caiming Xiong (Salesforce Research), Silvio Savarese (Salesforce Research), Doyen Sahoo (Salesforce Research)
15:12 · 6m · Short-paper | HyRACC: A Hybrid Retrieval-Augmented Framework for More Efficient Code Completion (Research Papers) | Chuanyi Li (Nanjing University), Jiwei Shang (Nanjing University), Yi Feng (Nanjing University), Bin Luo (Nanjing University)
15:18 · 6m · Short-paper | OptCodeTrans: Boost LLMs on Low-Resource Programming Language Translation (Research Papers) | Jianbo Lin (Nanjing University), Yi Shen (Nanjing University), Chuanyi Li (Nanjing University), Changan Niu (Software Institute, Nanjing University), Bin Luo (Nanjing University)