Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code
Code translation aims to convert source code from one programming language (PL) to another. Given the promising abilities of large language models (LLMs) in code synthesis, researchers are exploring their potential to automate code translation. The prerequisite for advancing the state of LLM-based code translation is to understand their promises and limitations over existing techniques. To that end, we present a large-scale empirical study to investigate the ability of general LLMs and code LLMs to translate code across pairs of different languages, including C, C++, Go, Java, and Python. Our study, which involves the translation of 1,700 code samples from three benchmarks and two real-world projects, reveals that LLMs are yet to be reliably used to automate code translation, with correct translations ranging from 2.1% to 47.3% for the studied LLMs. Further manual investigation of unsuccessful translations identifies 15 categories of translation bugs. We also compare LLM-based code translation with traditional non-LLM-based approaches. Our analysis shows that these two classes of techniques have their own strengths and weaknesses. Finally, insights from our study suggest that providing more context to LLMs during translation can help them produce better results. Based on this insight, we propose a prompt-crafting approach that incorporates the symptoms of erroneous translations; this improves the performance of LLM-based code translation by 5.5% on average. Our study is the first of its kind, in terms of scale and breadth, to provide insights into the current limitations of LLMs in code translation and opportunities for improving them. Our dataset, consisting of 1,700 code samples in five PLs with 10K+ tests, 43K+ translated code samples, 1,748 manually labeled bugs, and 1,365 bug-fix pairs, can help drive research in this area.
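The abstract's symptom-based prompt-crafting idea can be sketched as follows: when a translation fails to compile or pass tests, the observed symptom is fed back to the LLM in a follow-up prompt alongside the source and the faulty attempt. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, prompt wording, and example inputs are all assumptions.

```python
def build_repair_prompt(source_code: str, source_lang: str, target_lang: str,
                        failed_translation: str, symptom: str) -> str:
    """Compose a follow-up prompt that embeds the erroneous translation and
    its observed symptom (compile error, runtime error, or test failure).
    Illustrative only -- the paper's prompt templates may differ."""
    return (
        f"Translate the following {source_lang} code to {target_lang}.\n\n"
        f"Source ({source_lang}):\n{source_code}\n\n"
        f"A previous {target_lang} translation attempt was:\n"
        f"{failed_translation}\n\n"
        f"It failed with the following symptom:\n{symptom}\n\n"
        f"Produce a corrected {target_lang} translation."
    )

# Hypothetical example: a C-to-Python translation that failed a unit test.
prompt = build_repair_prompt(
    source_code="int add(int a, int b) { return a + b; }",
    source_lang="C",
    target_lang="Python",
    failed_translation='def add(a, b):\n    return str(a) + str(b)',
    symptom="AssertionError: add(2, 2) returned '22', expected 4",
)
print(prompt)
```

The re-prompt gives the model the concrete failure evidence rather than asking it to retranslate blind, which is the extra context the study found helpful.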
Fri 19 Apr · Displayed time zone: Lisbon
16:00 - 17:30 | Language Models and Generated Code 4 | New Ideas and Emerging Results / Research Track | at Almada Negreiros
Chair(s): Shin Yoo (Korea Advanced Institute of Science and Technology)
16:00 (15m) Talk | Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code | Research Track
Rangeet Pan (IBM Research), Ali Reza Ibrahimzada (University of Illinois Urbana-Champaign), Rahul Krishna (IBM Research), Divya Sankar (IBM Research), Lambert Pouguem Wassi (IBM Research), Michele Merler (IBM Research), Boris Sobolev (IBM Research), Raju Pavuluri (IBM T.J. Watson Research Center), Saurabh Sinha (IBM Research), Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign)
DOI · Pre-print · Media Attached
16:15 (15m) Talk | Traces of Memorisation in Large Language Models for Code | Research Track
Ali Al-Kaswan (Delft University of Technology, Netherlands), Maliheh Izadi (Delft University of Technology), Arie van Deursen (Delft University of Technology)
Pre-print
16:30 (15m) Talk | Language Models for Code Completion: A Practical Evaluation | Research Track
Maliheh Izadi (Delft University of Technology), Jonathan Katzy (Delft University of Technology), Tim van Dam (Delft University of Technology), Marc Otten (Delft University of Technology), Răzvan Mihai Popescu (Delft University of Technology), Arie van Deursen (Delft University of Technology)
Pre-print
16:45 (15m) Talk | Evaluating Large Language Models in Class-Level Code Generation | Research Track
Xueying Du (Fudan University), Mingwei Liu (Fudan University), Kaixin Wang (Fudan University), Hanlin Wang (Fudan University), Junwei Liu (Huazhong University of Science and Technology), Yixuan Chen (Fudan University), Jiayi Feng (Fudan University), Chaofeng Sha (Fudan University), Xin Peng (Fudan University), Yiling Lou (Fudan University)
Pre-print
17:00 (7m) Talk | Naturalness of Attention: Revisiting Attention in Code Language Models | New Ideas and Emerging Results
Pre-print
17:07 (7m) Talk | Towards Trustworthy AI Software Development Assistance | New Ideas and Emerging Results
DOI · Pre-print