ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal

Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, owing to factors such as restricted or unreliable internet access and institutional privacy policies that prohibit transmitting code to third-party vendors. Therefore, developing a compact, efficient, and energy-saving model for deployment on developers' devices becomes essential.

To this end, we propose Avatar, a novel approach that crafts a deployable model from a large language model of code by optimizing it in terms of model size, inference latency, energy consumption, and carbon footprint while maintaining comparable effectiveness (e.g., prediction accuracy on downstream tasks). The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with a Satisfiability Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT solver is used to form an appropriate configuration space, while the optimization algorithm identifies the Pareto-optimal set of configurations for training the optimized models via knowledge distillation. We evaluate Avatar with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160× smaller than the original large models. On the two tasks, the optimized models significantly reduce energy consumption (up to 184× less), carbon footprint (up to 157× less), and inference latency (up to 76× faster), with only a negligible loss in effectiveness (1.67% on average). Compared to the state-of-the-art approach, Avatar also optimizes language models of code more effectively across all metrics.
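To make the configuration-tuning idea concrete, the sketch below shows how an SMT solver (here z3 in Python) could carve out a feasible configuration space before a multi-objective search. The configuration dimensions (num_layers, hidden_size, vocab_size), the parameter-count size proxy, and the 3 MB budget are illustrative assumptions for this sketch, not the exact constraints or formulation used by Avatar.

# Minimal sketch: use an SMT solver to enumerate configurations that satisfy
# a deployment constraint. Dimensions, bounds, and the size proxy below are
# assumed for illustration only.
from z3 import Int, Solver, And, Or, sat

num_layers  = Int("num_layers")   # transformer layers of the small student model
hidden_size = Int("hidden_size")  # hidden dimension
vocab_size  = Int("vocab_size")   # tokenizer vocabulary size

solver = Solver()
solver.add(And(num_layers >= 1, num_layers <= 12))
solver.add(And(hidden_size >= 16, hidden_size <= 768))
solver.add(And(vocab_size >= 1000, vocab_size <= 50000))

# Hypothetical size proxy: embedding parameters plus per-layer parameters,
# 4 bytes each, must fit within a 3 MB deployment budget.
param_count = vocab_size * hidden_size + num_layers * 12 * hidden_size * hidden_size
solver.add(4 * param_count <= 3 * 1024 * 1024)

# Enumerate a handful of feasible configurations; each solution is blocked
# so the next check must return a different assignment.
candidates = []
while solver.check() == sat and len(candidates) < 5:
    model = solver.model()
    cfg = {str(v): model[v].as_long() for v in (num_layers, hidden_size, vocab_size)}
    candidates.append(cfg)
    solver.add(Or(num_layers != cfg["num_layers"],
                  hidden_size != cfg["hidden_size"],
                  vocab_size != cfg["vocab_size"]))

print(candidates)

In Avatar itself, feasible configurations obtained from the SMT-constrained space would then be scored on the competing objectives (size, latency, energy, effectiveness) by the tailored optimization algorithm, and the Pareto-optimal candidates trained with knowledge distillation from the original large model; the loop above only illustrates the space-forming step.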

Fri 19 Apr

Displayed time zone: Lisbon

16:00 - 17:30
16:00
15m
Talk
Predicting Performance and Accuracy of Mixed-Precision Programs for Precision Tuning
Research Track
Yutong Wang (University of California, Davis), Cindy Rubio-González (University of California, Davis)
16:15
15m
Talk
A Synthesis of Green Architectural Tactics for ML-Enabled Systems
Software Engineering in Society
Heli Järvenpää (Vrije Universiteit Amsterdam), Patricia Lago (Vrije Universiteit Amsterdam), Justus Bogner (Vrije Universiteit Amsterdam), Grace Lewis (Carnegie Mellon Software Engineering Institute), Henry Muccini (University of L'Aquila, Italy), Ipek Ozkaya (Carnegie Mellon University)
Pre-print
16:30
15m
Talk
Greening Large Language Models of Code
Software Engineering in Society
Jieke Shi (Singapore Management University), Zhou Yang (Singapore Management University), Hong Jin Kang (UCLA), Bowen Xu (North Carolina State University), Junda He (Singapore Management University), David Lo (Singapore Management University)
Pre-print Media Attached
16:45
15m
Talk
Lessons from Building CodeBuddy: A Contextualized AI Coding Assistant
Software Engineering in Practice
Gustavo Pinto (Federal University of Pará (UFPA) and Zup Innovation), Cleidson de Souza (Federal University of Pará, Belém), João Batista Cordeiro Neto (Federal University of Santa Catarina and Zup Innovation), Alberto de Souza (Zup Innovation), Tarcísio Gotto (Zup Innovation), Edward Monteiro (StackSpot)
17:00
15m
Talk
CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model
Software Engineering in Practice
Peng Di (Ant Group), Jianguo Li (Ant Group), Hang Yu (Ant Group), Wei Jiang (Ant Group)
17:15
7m
Talk
Breaking the Silence: the Threats of Using LLMs in Software Engineering
New Ideas and Emerging Results
June Sallou (Delft University of Technology), Thomas Durieux (TU Delft), Annibale Panichella (Delft University of Technology)
Pre-print