Metamorphic-Based Many-Objective Distillation of LLMs for Code-related Tasks
This program is tentative and subject to change.
Knowledge distillation compresses large language models (LLMs) into more compact and efficient versions that achieve similar accuracy on code-related tasks. However, as we demonstrate in this study, compressed models are four times less robust than the original LLMs when evaluated with metamorphic code: they have a 440% higher probability of misclassifying code clones due to minor changes in the code fragment under analysis, such as replacing parameter names with synonyms. To address this issue, we propose MORPH, a method that combines metamorphic testing with many-objective optimization for robust distillation of LLMs for code. MORPH efficiently explores the models' configuration space and generates Pareto-optimal models that effectively balance accuracy, efficiency, and robustness to metamorphic code. Metamorphic testing measures robustness as the number of code fragments for which a model makes different predictions on the original code and its semantically equivalent metamorphic variant (prediction flips). We evaluate MORPH on two tasks, code clone detection and vulnerability detection, targeting CodeBERT and GraphCodeBERT for distillation. Our comparison includes MORPH, the state-of-the-art distillation method AVATAR, and the fine-tuned non-distilled LLMs. Compared to AVATAR, MORPH produces compressed models that are (i) 47% more robust and (ii) 25% more efficient (fewer FLOPs), while maintaining (iii) equal or higher accuracy (up to +6%) and (iv) similar model size.
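The two core ideas in the abstract, a metamorphic transformation (renaming parameters with synonyms) and the prediction-flip robustness metric, plus the Pareto dominance relation underlying the many-objective search, can be sketched compactly. The following is a minimal illustrative sketch, not MORPH's actual implementation; the names `rename_parameters`, `prediction_flip_rate`, `dominates`, and the `model.predict` interface are all hypothetical.

```python
import re

def rename_parameters(code: str, synonyms: dict[str, str]) -> str:
    """Metamorphic variant: replace parameter names with synonyms.

    A faithful implementation would rewrite identifiers via an AST;
    this whole-word regex substitution keeps the illustration short.
    """
    for old, new in synonyms.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def prediction_flip_rate(model, fragments, synonyms) -> float:
    """Robustness metric from the abstract: fraction of code fragments
    whose prediction differs between the original and its semantically
    equivalent metamorphic variant (a "prediction flip").
    `model.predict` is a hypothetical classifier interface, e.g. a
    distilled CodeBERT clone detector."""
    flips = sum(
        model.predict(code) != model.predict(rename_parameters(code, synonyms))
        for code in fragments
    )
    return flips / len(fragments)

def dominates(a: tuple, b: tuple) -> bool:
    """Pareto dominance over maximized objectives, e.g.
    (accuracy, -FLOPs, -flip_rate): `a` dominates `b` if it is no worse
    on every objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

In the setting the abstract describes, each candidate compressed-model configuration would be scored on the three objectives (accuracy, efficiency, robustness) and only non-dominated configurations kept, yielding the Pareto-optimal set.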
Annibale Panichella is an Associate Professor in the Software Engineering Research Group (SERG) at the Delft University of Technology (TU Delft) in the Netherlands. He is the head of the Computational Intelligence for Software Engineering Lab (CISELab) within SERG. His research interests include security testing, software testing, search-based software engineering, testing for AI, and empirical software engineering. He serves and has served as a program committee member of various international conferences (e.g., ICSE, ESEC/FSE, ISSTA, GECCO, ICST) and as a reviewer for various international journals (e.g., TSE, TOSEM, TEVC, EMSE, STVR) in the fields of software engineering and evolutionary computation.
Wed 30 Apr. Displayed time zone: Eastern Time (US & Canada).
16:00 - 17:30

16:00 (15m, Talk): Faster Configuration Performance Bug Testing with Neural Dual-level Prioritization. Research Track. Youpeng Ma (University of Electronic Science and Technology of China), Tao Chen (University of Birmingham), Ke Li (University of Exeter)

16:15 (15m, Talk): Metamorphic-Based Many-Objective Distillation of LLMs for Code-related Tasks. Research Track. Annibale Panichella (Delft University of Technology)

16:30 (15m, Talk): NIODebugger: A Novel Approach to Repair Non-Idempotent-Outcome Tests with LLM-Based Agent. Research Track. Kaiyao Ke (University of Illinois at Urbana-Champaign)

16:45 (15m, Talk): Test Intention Guided LLM-based Unit Test Generation. Research Track. Zifan Nan (Huawei), Zhaoqiang Guo (Software Engineering Application Technology Lab, Huawei, China), Kui Liu (Huawei), Xin Xia (Huawei)

17:00 (15m, Talk): What You See Is What You Get: Attention-based Self-guided Automatic Unit Test Generation. Research Track. Xin Yin (Zhejiang University), Chao Ni (Zhejiang University), Xiaodan Xu (College of Computer Science and Technology, Zhejiang University), Xiaohu Yang (Zhejiang University)

17:15 (15m, Talk): Improving Code Performance Using LLMs in Zero-Shot: RAPGen. SE In Practice (SEIP). Spandan Garg (Microsoft Corporation), Roshanak Zilouchian Moghaddam (Microsoft), Neel Sundaresan (Microsoft)