Unraveling the Potential of Large Language Models in Code Translation: How Far Are We?
This program is tentative and subject to change.
While large language models (LLMs) exhibit state-of-the-art performance in various tasks, recent studies have revealed their struggle with code translation. This is because they have not been extensively pre-trained on parallel multilingual code, which code translation heavily depends on. Moreover, existing benchmarks only cover a limited subset of common programming languages and thus cannot reflect the full potential of LLMs in code translation. In this paper, we conduct a large-scale empirical study to explore the capabilities and limitations of LLMs in code translation tasks. We first craft a novel benchmark, PolyHumanEval, by extending HumanEval to a multilingual benchmark of 14 languages. With PolyHumanEval, we then perform over 110,000 translations with bleeding-edge code LLMs. The results show LLMs' suboptimal performance when translating from Python to other languages, as well as the negligible impact of widely adopted LLM optimization techniques, such as conventional pre-training and instruction tuning, on code translation. To further uncover the potential of LLMs in code translation, we propose two methods: (1) intermediary translation, which selects an intermediary language between the source and target ones; and (2) self-training, which fine-tunes LLMs on self-generated parallel data. Evaluated with CodeLlama-13B, our approach yields an average improvement of 11.7% in computational accuracy on Python-to-other translations. Notably, we find that Go can serve as a lingua franca for translating between any two studied languages.
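To make the two proposed ideas concrete, here is a minimal Python sketch of intermediary (pivot) translation and self-training data collection. It is an illustration only, not the paper's artifact: the llm_translate stub, the passes_tests filter, and the use of test execution as the filtering criterion are assumptions standing in for the actual pipeline, and the Go pivot simply follows the lingua-franca finding reported in the abstract.

```python
from typing import Callable, List, Tuple


def llm_translate(code: str, source_lang: str, target_lang: str) -> str:
    """Stand-in for a code-LLM call (e.g., prompting CodeLlama-13B to
    translate `code` from source_lang to target_lang). Hypothetical stub."""
    raise NotImplementedError("plug in your code-LLM inference call here")


def pivot_translate(code: str, source: str, target: str, pivot: str = "Go") -> str:
    """Intermediary translation: go source -> pivot -> target instead of a
    direct source -> target hop. Go is used as the pivot here, following the
    lingua-franca observation in the abstract."""
    if pivot in (source, target):
        return llm_translate(code, source, target)  # no pivot needed
    intermediate = llm_translate(code, source, pivot)
    return llm_translate(intermediate, pivot, target)


def build_self_training_pairs(
    snippets: List[str],
    source: str,
    target: str,
    passes_tests: Callable[[str, str], bool],
) -> List[Tuple[str, str]]:
    """Self-training data collection: translate each snippet and keep only
    candidates accepted by a caller-supplied filter (here, a test check);
    the surviving (source, target) pairs are then used to fine-tune the LLM."""
    pairs = []
    for code in snippets:
        candidate = llm_translate(code, source, target)
        if passes_tests(code, candidate):
            pairs.append((code, candidate))
    return pairs
```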
Wed 4 Dec (displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi)
14:00 - 15:30

14:00 (30m, Talk) Unraveling the Potential of Large Language Models in Code Translation: How Far Are We?
Technical Track
Qingxiao Tao (School of Software, Shanghai Jiao Tong University, Shanghai, China), Tingrui Yu (School of Software, Shanghai Jiao Tong University, Shanghai, China), Xiaodong Gu (Shanghai Jiao Tong University), Beijun Shen (Shanghai Jiao Tong University)

14:30 (30m, Talk) Effective Vulnerability Detection over Code Token Graph: A GCN with Score Gate Based Approach
Technical Track
Nong Zou (Southwest University), Nan Li (Southwest University), Junxiang Zhang (Southwest University), Xiaomeng Wang (Southwest University), Hong Lai (Southwest University), Tao Jia (Southwest University)

15:00 (30m, Talk) Putting APIs in the Right Order with Gated Graph Neural Networks
Technical Track