SANER 2025
Tue 4 - Fri 7 March 2025, Montréal, Québec, Canada

This program is tentative and subject to change.

Wed 5 Mar 2025 11:18 - 11:30 at L-1710 - Empirical Studies & LLM

Language models have proven successful across a wide range of software engineering tasks, but their significant computational costs often hinder their practical adoption. To address this challenge, researchers have begun applying various compression strategies to improve the efficiency of language models for code. These strategies aim to optimize inference latency and memory usage, though often at the cost of reduced model effectiveness. However, there is still a considerable gap in understanding how these strategies influence the efficiency and effectiveness of language models for code. Here, we empirically investigate the impact of three well-known compression strategies – Knowledge Distillation, Model Quantization, and Model Pruning – across three different classes of software engineering tasks: defect prediction, code summarization, and code search. Our findings reveal that the impact of these strategies varies greatly depending on the task and the specific compression method employed. Practitioners and researchers can use these insights to make informed decisions when selecting the most appropriate compression strategy, balancing efficiency and effectiveness based on their specific needs.
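
To make the three strategies concrete, the sketch below applies them to the public microsoft/codebert-base checkpoint using stock PyTorch utilities. This is a minimal illustration, not the authors' experimental setup: the pruning ratio (30%) and distillation temperature (T=2.0) are illustrative assumptions, and the distillation loss would be applied to the logits of a task-specific head rather than the bare encoder.

# Minimal sketch (not the paper's setup): the three compression strategies
# applied to the public CodeBERT checkpoint with stock PyTorch utilities.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# 1) Model Quantization: post-training dynamic quantization converts the
#    weights of all Linear layers from float32 to int8 at load time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# 2) Model Pruning: unstructured L1 (magnitude) pruning zeroes out the
#    30% smallest-magnitude weights in every Linear layer.
pruned_model = AutoModel.from_pretrained("microsoft/codebert-base")
for module in pruned_model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeroed weights in

# 3) Knowledge Distillation: a smaller student is trained to match the
#    teacher's softened output distribution (Hinton-style KL loss).
def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

Of the three, dynamic quantization requires no training or calibration data, which is why it is often the first strategy tried on encoder models such as CodeBERT; distillation, by contrast, requires a full training run for the student.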

Wed 5 Mar

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30: Empirical Studies & LLM at L-1710
11:00
15m
Talk
Beyond pip install: Evaluating LLM agents for the automated installation of Python projects
Research Papers
Louis Mark Milliken KAIST, Sungmin Kang Korea Advanced Institute of Science and Technology, Shin Yoo Korea Advanced Institute of Science and Technology
11:18
12m
Talk
On the Compression of Language Models for Code: An Empirical Study on CodeBERT
Research Papers
Giordano d'Aloisio University of L'Aquila, Luca Traini University of L'Aquila, Federica Sarro University College London, Antinisca Di Marco University of L'Aquila
Pre-print
11:30
15m
Talk
Can Large Language Models Discover Metamorphic Relations? A Large-Scale Empirical Study
Research Papers
Jiaming Zhang University of Science and Technology Beijing, Chang-ai Sun University of Science and Technology Beijing, Huai Liu Swinburne University of Technology, Sijin Dong University of Science and Technology Beijing
11:45
15m
Talk
Revisiting the Non-Determinism of Code Generation by the GPT-3.5 Large Language Model
Reproducibility Studies and Negative Results (RENE) Track
Salimata Sawadogo Centre d'Excellence Interdisciplinaire en Intelligence Artificielle pour le Développement (CITADEL), Aminata Sabané Université Joseph KI-ZERBO, Centre d'Excellence CITADEL, Rodrique Kafando Centre d'Excellence Interdisciplinaire en Intelligence Artificielle pour le Développement (CITADEL), Tegawendé F. Bissyandé University of Luxembourg
12:00
15m
Talk
Language Models to Support Multi-Label Classification of Industrial Data
Industrial Track
Waleed Abdeen Blekinge Institute of Technology, Michael Unterkalmsteiner, Krzysztof Wnuk Blekinge Institute of Technology, Alessio Ferrari CNR-ISTI, Panagiota Chatzipetrou