Developers often evolve an existing software system by making internal changes, a process called migration. Moving to a new framework, changing an implementation to improve efficiency, and upgrading a dependency to its latest version are all examples of migrations.
Migration is a common and typically continuous maintenance task, undertaken either manually or with tooling. Certain migrations are labor-intensive and costly, the required work is not rewarding to developers, and they can take years to complete. Hence, automation is preferred for such migrations.
In this paper, we discuss a large-scale, costly, and traditionally manual migration project at Google, propose a novel automated algorithm that uses change location discovery and a Large Language Model (LLM) to help developers conduct the migration, report the results of a large case study, and discuss lessons learned.
Our case study of 39 distinct migrations undertaken by three developers over twelve months shows that a total of 595 code changes with 93,574 edits were submitted, of which 74.45% of the code changes and 69.46% of the edits were generated by the LLM. The developers reported high satisfaction with the automated tooling and estimated a 50% reduction in the total time spent on the migration compared to earlier manual migrations.
Our results suggest that our automated, LLM-assisted workflow can serve as a model for similar initiatives.
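The workflow the abstract describes, discovering change locations and then having an LLM propose edits for developer review, can be sketched at a high level. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function names (`find_change_locations`, `llm_propose_edit`, `migrate`) and the trivial string-rewrite standing in for the LLM call are all hypothetical.

```python
# Hypothetical sketch of an LLM-assisted migration loop. Every name and
# step here is a placeholder, not the algorithm from the paper.

def find_change_locations(codebase: list[str], symbol: str) -> list[str]:
    """Change location discovery: files that reference the migrated symbol."""
    return [text for text in codebase if symbol in text]

def llm_propose_edit(file_text: str, instructions: str) -> str:
    """Stand-in for an LLM call that rewrites a file per the migration prompt."""
    # Trivial placeholder edit: rename the old API to the new one.
    return file_text.replace("old_api", "new_api")

def migrate(codebase: list[str], symbol: str, instructions: str) -> list[str]:
    """Collect candidate edits for a developer to review and submit."""
    edits = []
    for file_text in find_change_locations(codebase, symbol):
        candidate = llm_propose_edit(file_text, instructions)
        if candidate != file_text:   # keep only files the model actually changed
            edits.append(candidate)  # in practice, a developer reviews each edit
    return edits

files = ["x = old_api(1)", "print('unrelated')"]
print(migrate(files, "old_api", "replace old_api with new_api"))
```

The key structural point is the division of labor: deterministic tooling narrows the search to candidate locations, the LLM generates the edits, and developers retain final review before anything is submitted.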
Tue 24 Jun
14:00 - 15:30 | LLM for SE 2
Research Papers / Industry Papers / Ideas, Visions and Reflections at Cosmos Hall
Chair(s): Jialun Cao (Hong Kong University of Science and Technology)

14:00 (20m) Talk: Migrating Code At Scale With LLMs At Google [Industry Papers]
Celal Ziftci (Google), Stoyan Nikolov (Google, Inc.), Anna Sjovall (Google, Inc.), Bo Kim (Google), Daniele Codecasa (Google, Inc.), Max Kim (Google)

14:20 (20m) Talk: Integrating Large Language Models and Reinforcement Learning for Non-Linear Reasoning [Research Papers]

14:40 (20m) Talk: Smaller but Better: Self-Paced Knowledge Distillation for Lightweight yet Effective LCMs [Research Papers]
Yujia Chen (Harbin Institute of Technology, Shenzhen), Yang Ye (Huawei Cloud Computing Technologies Co., Ltd.), Zhongqi Li (Huawei Cloud Computing Technologies Co., Ltd.), Yuchi Ma (Huawei Cloud Computing Technologies), Cuiyun Gao (Harbin Institute of Technology, Shenzhen)

15:00 (10m) Talk: Enabling Scalable Proactive Workspaces With Environment-Wide Context [Ideas, Visions and Reflections]
Nick Bradley (University of British Columbia), Thomas Fritz (University of Zurich), Reid Holmes (University of British Columbia)

15:10 (20m) Talk: Bridging Operator Semantic Inconsistencies: A Source-level Cross-framework Model Conversion Approach [Research Papers]
Xingpei Li (National University of Defense Technology, China), Yan Lei (Chongqing University), Zhouyang Jia (National University of Defense Technology), Yuanliang Zhang (National University of Defense Technology), Haoran Liu (National University of Defense Technology), Liqian Chen (National University of Defense Technology), Wei Dong (National University of Defense Technology), Shanshan Li (National University of Defense Technology)
This is the main event hall of the Clarion Hotel, used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also take place in this room.
The room is just in front of the registration desk, on the other side of the main conference area. The large doors with numbers “1” and “2” provide access to the Cosmos Hall.