Large Language Models (LLMs) have attracted considerable attention in the Software Engineering (SE) community, particularly for code-related tasks. The common approach of fully fine-tuning these models is computationally expensive and time-consuming, and therefore not accessible to everyone. More importantly, because these models have billions of parameters, full fine-tuning for every new task or domain is inefficient and often infeasible. This technical briefing covers the alternative approach of Parameter-Efficient Fine-Tuning (PEFT): it discusses the state-of-the-art techniques, reflects on the few existing studies that apply PEFT in Software Engineering, and considers how adapting PEFT architectures designed for natural language processing could improve performance on code-related tasks.
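To make the idea of PEFT concrete, the following is a minimal sketch of one widely used technique, LoRA, applied to a code LLM with the Hugging Face `transformers` and `peft` libraries. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not recommendations from the briefing.

```python
# Illustrative LoRA sketch (an example PEFT technique); assumes the
# `transformers` and `peft` packages are installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a pretrained code LLM; any causal LM checkpoint would work here.
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# LoRA injects small trainable low-rank matrices into selected projection
# layers while the original model weights stay frozen.
lora_config = LoraConfig(
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=16,                 # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["qkv_proj"],   # module names depend on the architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)

# Typically well under 1% of the parameters remain trainable, which is what
# makes fine-tuning feasible without the cost of updating all weights.
model.print_trainable_parameters()
```

The resulting model can then be trained with a standard fine-tuning loop or trainer; only the injected LoRA parameters receive gradient updates.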