Thu 13 Oct 2022 16:40 - 17:00 at Ballroom C East - Technical Session 29 - AI for SE II Chair(s): Tim Menzies

Although large pre-trained models of code have delivered significant advances in various code-processing tasks, an obstacle stands in the way of their wide adoption in software developers' daily workflows: these models consume hundreds of megabytes of memory and run slowly, especially on personal devices, which complicates deployment and greatly degrades the user experience.

This motivates us to propose Compressor, a novel approach that compresses pre-trained models of code into extremely small models with negligible performance sacrifice. Our method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model that follows an architectural design similar to the original pre-trained model. To tackle this problem, Compressor uses a genetic algorithm (GA)-based strategy to guide the simplification process. Prior studies found that a model with higher computational cost tends to be more powerful. Inspired by this insight, the GA is designed to maximize a model's Giga floating-point operations (GFLOPs), an indicator of its computational cost, while satisfying the constraint on the target model size. Then, we use knowledge distillation to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We compress both models to 3 MB, only 0.6% of their original size. The results show that the compressed CodeBERT and GraphCodeBERT reduce inference latency by 70.75% and 79.21%, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task, and even higher ratios (99.20% and 97.52%) of the original performance on the clone detection task.
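The GA-guided simplification described above can be sketched in a few lines. The sketch below is illustrative only: the candidate encoding (number of layers, hidden size, feed-forward size), the rough transformer cost formulas, and the size budget are all assumptions, not the paper's actual implementation. Candidates that exceed the parameter budget are penalized; among feasible candidates, the fittest is the one with the most GFLOPs.

```python
import random

# Hypothetical sketch of a GA maximizing GFLOPs under a model-size budget.
# A candidate is a simplified BERT-like architecture (layers, hidden, ffn);
# the cost formulas are rough transformer estimates, not Compressor's own.

SEQ_LEN = 128          # assumed input length for the cost estimate
TARGET_PARAMS = 1.5e6  # roughly "3 MB" at 2 bytes/parameter (assumption)

def param_count(layers, hidden, ffn, vocab=1000):
    # embeddings + per-layer attention and feed-forward weights (rough)
    return vocab * hidden + layers * (4 * hidden * hidden + 2 * hidden * ffn)

def gflops(layers, hidden, ffn):
    # ~2 multiply-adds per weight per token, over the sequence (rough)
    flops = 2 * SEQ_LEN * layers * (4 * hidden * hidden + 2 * hidden * ffn)
    return flops / 1e9

def fitness(cand):
    layers, hidden, ffn = cand
    if param_count(layers, hidden, ffn) > TARGET_PARAMS:
        return -1.0  # infeasible: violates the model-size constraint
    return gflops(layers, hidden, ffn)

def mutate(cand):
    layers, hidden, ffn = cand
    choice = random.randrange(3)
    if choice == 0:
        layers = max(1, layers + random.choice([-1, 1]))
    elif choice == 1:
        hidden = max(16, hidden + random.choice([-16, 16]))
    else:
        ffn = max(16, ffn + random.choice([-16, 16]))
    return (layers, hidden, ffn)

def search(generations=200, pop_size=20, seed=0):
    random.seed(seed)
    pop = [(random.randint(1, 12), random.randrange(16, 256, 16),
            random.randrange(16, 512, 16)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # elitism: keep the fittest half
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = search()
```

The returned architecture respects the size budget while pushing GFLOPs as high as the GA managed, mirroring the "more compute tends to mean more capability" heuristic the search exploits.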
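The distillation step can likewise be sketched in miniature. Below, a stand-in 1-D "teacher" plays the role of the large pre-trained model and a 1-D "student" plays the compressed model; both functions, the learning rate, and the loop sizes are illustrative assumptions. The key idea matches the text: unlabelled inputs are fed to the teacher, and its outputs serve as the training labels for the student.

```python
import math
import random

def teacher(x):
    # stand-in for the large pre-trained model's soft prediction
    return 1.0 / (1.0 + math.exp(-3.0 * x))

def student(w, x):
    # tiny one-parameter model distilled from the teacher
    return 1.0 / (1.0 + math.exp(-w * x))

random.seed(0)
unlabelled = [random.uniform(-2.0, 2.0) for _ in range(500)]

w = 0.0
lr = 0.5
for _ in range(200):
    for x in unlabelled:
        y = teacher(x)          # teacher output used as the label
        p = student(w, x)
        w -= lr * (p - y) * x   # SGD on the cross-entropy loss w.r.t. w
```

After training, the student's parameter converges toward the teacher's (here, w ≈ 3), i.e., the small model reproduces the large model's predictions without ever seeing ground-truth labels.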

Thu 13 Oct

Displayed time zone: Eastern Time (US & Canada)

16:00 - 18:00
Technical Session 29 - AI for SE II (Research Papers / Journal-first Papers) at Ballroom C East
Chair(s): Tim Menzies North Carolina State University
16:00
20m
Research paper
Are Neural Bug Detectors Comparable to Software Developers on Variable Misuse Bugs?
Research Papers
Cedric Richter University of Oldenburg, Jan Haltermann University of Oldenburg, Marie-Christine Jakobs Technical University of Darmstadt, Felix Pauck Paderborn University, Germany, Stefan Schott Paderborn University, Heike Wehrheim University of Oldenburg
16:20
20m
Research paper
Learning Contract Invariants Using Reinforcement Learning
Research Papers
Junrui Liu University of California, Santa Barbara, Yanju Chen University of California at Santa Barbara, Bryan Tan Amazon Web Services, Işıl Dillig University of Texas at Austin, Yu Feng University of California at Santa Barbara
16:40
20m
Research paper
Compressing Pre-trained Models of Code into 3 MB
Research Papers
Jieke Shi Singapore Management University, Zhou Yang Singapore Management University, Bowen Xu School of Information Systems, Singapore Management University, Hong Jin Kang Singapore Management University, Singapore, David Lo Singapore Management University
17:00
20m
Research paper
A Transferable Time Series Forecasting Service Using a Deep Transformer Model for Online Systems (Virtual)
Research Papers
Tao Huang Tencent, Pengfei Chen Sun Yat-Sen University, Jingrun Zhang School of Data and Computer Science, Sun Yat-sen University, Ruipeng Li Tencent, Rui Wang Tencent
17:20
20m
Paper
The Weights Can Be Harmful: Pareto Search versus Weighted Search in Multi-Objective Search-Based Software Engineering (Virtual)
Journal-first Papers
Tao Chen Loughborough University, Miqing Li University of Birmingham
17:40
20m
Research paper
Robust Learning of Deep Predictive Models from Noisy and Imbalanced Software Engineering Datasets (Virtual)
Research Papers
Zhong Li Nanjing University, Minxue Pan Nanjing University, Yu Pei Hong Kong Polytechnic University, Tian Zhang Nanjing University, Linzhang Wang Nanjing University, Xuandong Li Nanjing University