Recent advances in Large Language Model (LLM) based Generative AI techniques have made it feasible to translate enterprise-level code from legacy languages such as COBOL to modern languages such as Java or Python. While the results of LLM-based automatic transformation are encouraging, the resulting code cannot be trusted to faithfully preserve the semantics of the original. We propose a framework and a tool to help validate the equivalence of COBOL programs and their translated Java counterparts. The results can also guide repair of the translated code when discrepancies are found and provide feedback to the AI model for improvement. We have developed a symbolic-execution-based test generator that automatically produces unit tests for the source COBOL programs and mocks their external resource calls. We then generate equivalent JUnit test cases with equivalent mocking and run them against the translated Java to check semantic equivalence between the original and translated programs.
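The abstract does not specify the exact shape of the generated tests, so the fragment below is only an illustrative sketch of what a generated JUnit test with mocked external resources could look like. The class, interface, and method names (AccountProgram, CustomerDb, computeBalance) and the Mockito-based mocking style are assumptions for illustration, not the tool's actual output; the concrete input values and the expected result stand in for data that symbolic execution of the COBOL source would supply.

```java
// Illustrative sketch (not the tool's actual output): a JUnit 5 test that checks
// the translated Java against an expected value recorded from the COBOL original,
// with the external resource call replaced by a Mockito mock on both sides.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class AccountProgramEquivalenceTest {

    // Hypothetical external-resource interface in the translated code; in the COBOL
    // original this would correspond to an external call such as a database read.
    interface CustomerDb {
        BigDecimal fetchBalance(String accountId);
    }

    // Hypothetical stand-in for the translated Java unit under test.
    static class AccountProgram {
        private final CustomerDb db;
        AccountProgram(CustomerDb db) { this.db = db; }

        BigDecimal computeBalance(String accountId, BigDecimal rate) {
            // e.g. COBOL: COMPUTE WS-NEW-BAL = WS-BAL + (WS-BAL * WS-RATE)
            BigDecimal balance = db.fetchBalance(accountId);
            return balance.add(balance.multiply(rate));
        }
    }

    @Test
    void matchesCobolOutputForGeneratedInput() {
        // The input values and expected output would come from symbolic execution of
        // the COBOL source; the mock replays the same external-resource response that
        // the COBOL-side test harness used, so both sides see identical inputs.
        CustomerDb db = mock(CustomerDb.class);
        when(db.fetchBalance("ACCT-0001")).thenReturn(new BigDecimal("1000.00"));

        AccountProgram program = new AccountProgram(db);
        BigDecimal actual = program.computeBalance("ACCT-0001", new BigDecimal("0.05"));

        // Expected value recorded from running the original COBOL on the same inputs.
        assertEquals(0, actual.compareTo(new BigDecimal("1050.00")));
    }
}
```

As described in the abstract, running such tests on the translated Java, with mocks replaying the same external-resource responses observed on the COBOL side, is what lets the tool flag semantic divergences and feed them back for repair and model improvement.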
Session: Program and Code translation (Research Papers / Tool Demonstrations), Tue 29 Oct, 15:30 - 16:30 Pacific Time (US & Canada), Compagno. Chair(s): Haiyan Zhao (Peking University)

15:30 (15m talk): To Tag, or Not to Tag: Translating C's Unions to Rust's Tagged Unions. Research Papers.
15:45 (15m talk): Semi-Supervised Code Translation: Overcoming the Scarcity of Parallel Code Data. Research Papers. Ming Zhu, Mohimenul Karim, Ismini Lourentzou, Daphne Yao (Virginia Tech)
16:00 (15m talk): A Joint Learning Model with Variational Interaction for Multilingual Program Translation. Research Papers.
16:15 (10m talk): Automated Validation of COBOL to Java Transformation. Tool Demonstrations. Atul Kumar, Diptikalyan Saha (IBM Research India); Toshiaki Yasue, Kohichi Ono, Fumiko Satoh (IBM Research - Tokyo); Saravanan Krishnan, Sandeep Hans (IBM India Research Lab); Gerald Mitchell, Sachin Kumar (IBM Software)