ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Thu 18 Apr 2024 14:00 - 15:30 at Eugénio de Andrade - Technical Briefings 5 Chair(s): Fatemeh Hendijani Fard

Large Language Models (LLMs) have gained much attention in the Software Engineering (SE) community, particularly for code-related tasks. Although fully fine-tuning these models is a common approach, it is a computationally heavy and time-consuming process that is not accessible to everyone. More importantly, as models grow to billions of parameters, fully fine-tuning them for each new task or domain becomes infeasible and inefficient. This technical briefing covers the alternative approach, Parameter Efficient Fine-Tuning (PEFT): it surveys state-of-the-art techniques, reflects on the few studies that apply PEFT in Software Engineering, and discusses how adapting current PEFT architectures from natural language processing could enhance performance on code-related tasks.
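As one concrete illustration of the PEFT idea (not part of the briefing itself), the sketch below shows a LoRA-style low-rank update in plain NumPy. LoRA is one widely used PEFT technique: the pretrained weight matrix is frozen, and only a small low-rank correction is trained. All names and dimensions here are hypothetical.

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative only; dimensions are made up).
# Instead of updating the full weight matrix W (d_out x d_in), we freeze W
# and learn a low-rank update B @ A, with A (r x d_in) and B (d_out x r).
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # trainable; zero-init so the update starts at 0

def forward(x, alpha=8.0):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted model initially matches the frozen model.
assert np.allclose(forward(x), W @ x)

# Parameter comparison: full fine-tuning trains W.size parameters,
# while LoRA trains only A.size + B.size.
full_params = W.size
lora_params = A.size + B.size
print(full_params, lora_params)
```

Here the trainable parameter count drops from d_out * d_in to r * (d_in + d_out), which is the source of PEFT's efficiency when d_in and d_out are large and r is small.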

Thu 18 Apr

Displayed time zone: Lisbon

14:00 - 15:30
Technical Briefings 5 at Eugénio de Andrade
Chair(s): Fatemeh Hendijani Fard, University of British Columbia
14:00 (90m) Paper
Technical Briefing on Parameter Efficient Fine-Tuning of (Large) Language Models for Code-Intelligence
Fatemeh Hendijani Fard, University of British Columbia