One Adapter for All Programming Languages? Adapter Tuning for Multilingual Tasks in Software Engineering
As pre-trained models automate many code intelligence tasks, a widely used paradigm is to fine-tune a separate model on the task dataset of each programming language. A recent study reported that multilingual fine-tuning benefits a range of tasks and models. However, we find that multilingual fine-tuning leads to performance degradation on the recent models UniXcoder and CodeT5.
To alleviate the potentially catastrophic forgetting issue in multilingual models, we freeze all pre-trained model parameters, insert a parameter-efficient structure called an adapter, and fine-tune only the adapter. Compared with full-model fine-tuning for each programming language, adapter tuning updates only 0.6% of the overall parameters, yet yields consistent improvements on code search and summarization tasks, achieving state-of-the-art results. In addition, we experimentally show its effectiveness in cross-lingual and low-resource scenarios. On code summarization, multilingual fine-tuning with only 200 samples per programming language approaches the results of fine-tuning on the entire dataset. Our experiments on three probing tasks show that adapter tuning significantly outperforms full-model fine-tuning and effectively overcomes catastrophic forgetting.
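A minimal PyTorch sketch of the idea described above, not the authors' implementation: the Adapter and AdaptedLayer classes, the bottleneck size of 64, the insertion point after each full Transformer layer, and the toy 12-layer encoder are all illustrative assumptions; the paper's 0.6% figure depends on its actual adapter configuration and base model.

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        # Bottleneck adapter: down-project, non-linearity, up-project,
        # with a residual connection around the whole block.
        def __init__(self, hidden_size: int, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_size, bottleneck)
            self.up = nn.Linear(bottleneck, hidden_size)
            self.act = nn.GELU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(self.act(self.down(x)))

    class AdaptedLayer(nn.Module):
        # Wraps a frozen Transformer layer; only the adapter trains.
        def __init__(self, layer: nn.Module, hidden_size: int):
            super().__init__()
            self.layer = layer
            self.adapter = Adapter(hidden_size)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.adapter(self.layer(x))

    def setup_adapter_tuning(model: nn.Module):
        # Freeze every pre-trained weight; keep only adapter parameters trainable.
        for name, p in model.named_parameters():
            p.requires_grad = "adapter" in name
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        print(f"trainable fraction: {trainable / total:.2%}")
        return [p for p in model.parameters() if p.requires_grad]

    # Toy 12-layer encoder with hidden size 768 (the size used by
    # UniXcoder-base and CodeT5-base); stands in for the real model.
    encoder = nn.Sequential(*[
        AdaptedLayer(nn.TransformerEncoderLayer(d_model=768, nhead=12,
                                                batch_first=True), 768)
        for _ in range(12)
    ])
    optimizer = torch.optim.AdamW(setup_adapter_tuning(encoder), lr=1e-4)

For this toy configuration the printed trainable fraction comes out around 2%; shrinking the bottleneck shrinks it further, and the paper's 0.6% reflects its own adapter and model sizes.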
Wed 17 May (displayed time zone: Hobart)
11:00 - 12:30 | AI models for SE (Journal-First Papers / Technical Track / DEMO - Demonstrations / NIER - New Ideas and Emerging Results) at Level G - Plenary Room 1. Chair(s): Denys Poshyvanyk (College of William and Mary)
11:00 15m Talk | One Adapter for All Programming Languages? Adapter Tuning for Multilingual Tasks in Software Engineering (Technical Track). Deze Wang (National University of Defense Technology), Boxing Chen, Shanshan Li (National University of Defense Technology), Wei Luo, Shaoliang Peng (Hunan University), Wei Dong (School of Computer, National University of Defense Technology, China), Liao Xiangke (National University of Defense Technology)
11:15 15m Talk | CCRep: Learning Code Change Representations via Pre-Trained Code Model and Query Back (Technical Track). Zhongxin Liu (Zhejiang University), Zhijie Tang (Zhejiang University), Xin Xia (Huawei), Xiaohu Yang (Zhejiang University). Pre-print
11:30 15m Talk | Keeping Pace with Ever-Increasing Data: Towards Continual Learning of Code Intelligence Models (Technical Track). Shuzheng Gao (Harbin Institute of Technology), Hongyu Zhang (The University of Newcastle), Cuiyun Gao (Harbin Institute of Technology), Chaozheng Wang (Harbin Institute of Technology)
11:45 7m Talk | PCR-Chain: Partial Code Reuse Assisted by Hierarchical Chaining of Prompts on Frozen Copilot (DEMO - Demonstrations). Qing Huang (School of Computer Information Engineering, Jiangxi Normal University), Jiahui Zhu (School of Computer Information Engineering, Jiangxi Normal University), Zhilong Li (School of Computer Information Engineering, Jiangxi Normal University), Zhenchang Xing, Changjing Wang (School of Computer Information Engineering, Jiangxi Normal University), Xiwei (Sherry) Xu (CSIRO’s Data61)
11:52 7m Talk | Towards Learning Generalizable Code Embeddings using Task-agnostic Graph Convolutional Networks (Journal-First Papers). Zishuo Ding (Concordia University), Heng Li (Polytechnique Montréal), Weiyi Shang (University of Waterloo), Tse-Hsun (Peter) Chen (Concordia University)
12:00 7m Talk | deGraphCS: Embedding Variable-based Flow Graph for Neural Code Search (Journal-First Papers). Chen Zeng (National University of Defense Technology), Yue Yu (College of Computer, National University of Defense Technology, Changsha, China), Shanshan Li (National University of Defense Technology), Xin Xia (Huawei), Wang Zhiming (National University of Defense Technology), Mingyang Geng (National University of Defense Technology), Linxiao Bai (National University of Defense Technology), Wei Dong (School of Computer, National University of Defense Technology, China), Liao Xiangke (National University of Defense Technology)
12:07 7m Talk | CodeS: Towards Code Model Generalization Under Distribution Shift (NIER - New Ideas and Emerging Results). Qiang Hu (University of Luxembourg), Yuejun Guo (University of Luxembourg), Xiaofei Xie (Singapore Management University), Maxime Cordy (University of Luxembourg), Lei Ma (University of Alberta), Mike Papadakis (University of Luxembourg), Yves Le Traon (University of Luxembourg)
12:15 7m Talk | Towards using Few-Shot Prompt Learning for Automating Model Completion (NIER - New Ideas and Emerging Results). Meriem Ben Chaaben (Université de Montréal, DIRO), Lola Burgueño (University of Malaga), Houari Sahraoui (Université de Montréal)