Tutorial: Beyond Code Generation: Evaluating and Improving LLMs for Code Intelligence
Abstract:
Large Language Models (LLMs) have demonstrated remarkable capabilities in code generation, but how well do they support broader aspects of code intelligence, such as comprehension and effective communication? This tutorial explores the limitations and advancements in LLM-based code intelligence, focusing on benchmarking, retrieval-augmented generation (RAG), Agent-LLMs, and model improvement strategies.
We will begin by discussing evaluation methodologies to highlight gaps in reasoning, correctness, and communication in LLM-generated code. Next, we will examine techniques for improving developer support, including the integration of retrieval-augmented generation and agentic workflows. The session will conclude with a discussion of open challenges and future directions, equipping attendees with strategies to enhance LLM-driven code assistance.
Through this tutorial, attendees will gain a deeper understanding of the capabilities of LLMs for code, learn to identify their weaknesses, and learn to leverage augmentation techniques to improve their reliability and usability in software engineering workflows.
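As a rough illustration of the retrieval-augmented code assistance mentioned above, the sketch below shows a minimal, hypothetical RAG loop: retrieve project snippets by lexical overlap with a developer's question, then assemble them into a prompt for a model. The snippet corpus, the overlap-based retriever, and the `call_llm` placeholder are all illustrative assumptions, not part of the tutorial materials.

```python
"""Minimal sketch of retrieval-augmented generation (RAG) for code assistance.

Everything here is illustrative: the snippet corpus, the lexical-overlap
retriever, and the call_llm placeholder are assumptions, not a real system.
"""

import re

# A toy "codebase" to retrieve from (in practice: an indexed repository,
# API docs, issue threads, etc.).
SNIPPETS = [
    "def parse_config(path):\n    import json\n    with open(path) as f:\n        return json.load(f)",
    "def retry(fn, attempts=3):\n    for i in range(attempts):\n        try:\n            return fn()\n        except Exception:\n            if i == attempts - 1:\n                raise",
    "class Cache(dict):\n    def get_or_set(self, key, factory):\n        if key not in self:\n            self[key] = factory()\n        return self[key]",
]


def tokenize(text: str) -> set[str]:
    """Lowercased word tokens; a stand-in for a real embedding model."""
    return set(re.findall(r"[a-zA-Z_]+", text.lower()))


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank snippets by lexical overlap with the query and keep the top k."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda s: len(q & tokenize(s)), reverse=True)
    return scored[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the retrieved snippets and the question into one prompt."""
    joined = "\n\n".join(
        f"# Context snippet {i + 1}\n{s}" for i, s in enumerate(context)
    )
    return f"{joined}\n\n# Question\n{query}\n# Answer with code and a short explanation."


def call_llm(prompt: str) -> str:
    """Placeholder for a model call (hypothetical; no real API is invoked)."""
    return f"[model response to a prompt of {len(prompt)} characters]"


if __name__ == "__main__":
    question = "How do I retry a flaky function call a few times before giving up?"
    context = retrieve(question, SNIPPETS)
    print(call_llm(build_prompt(question, context)))
```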

Prof. Fatemeh Hendijani Fard
Dr. Fard is an Assistant Professor at the University of British Columbia (Okanagan Campus). Her research lies at the intersection of Natural Language Processing and Software Engineering. Dr. Fard and her team develop code intelligence models focused on low-resource languages and lower computational costs. Few-shot learning, adapters, and (large) language models are at the heart of her work. Her research also serves as a Diversity and Inclusion initiative, making the benefits of automated tools and advances in deep neural networks accessible to communities working with understudied programming languages and to those with restricted GPU access.
Dr. Fard teaches in the Master of Data Science Program, is a member of the CITECH program and MMRI, is part of the Killam family of scholars, and is an IEEE and ACM member. She strongly advocates for Diversity and Inclusion, specifically for underrepresented women in STEM. Further details can be found at:
https://cmps.ok.ubc.ca/about/contact/fatemeh-hendijani-fard/
Dates
Mon 28 Apr 2025
This program is tentative and subject to change.
Mon 28 Apr
Displayed time zone: Eastern Time (US & Canada)
16:00 - 17:30
16:36 (45m) Tutorial: Beyond Code Generation: Evaluating and Improving LLMs for Code Intelligence
Fatemeh Hendijani Fard, University of British Columbia