FORGE 2024
Sun 14 Apr 2024 Lisbon, Portugal
co-located with ICSE 2024

Neural Code Models (NCMs) are rapidly progressing from research prototypes to commercial developer tools. As such, understanding the capabilities and limitations of these models is becoming critical. However, their abilities are typically measured with automated metrics that reveal only a portion of their real-world performance. While the performance of NCMs generally appears promising, much remains unknown about how such models arrive at their decisions or whether practitioners trust NCMs' outcomes. In this talk, I will introduce doCode, a post hoc interpretability framework specific to NCMs that can explain model predictions. doCode is based on causal inference and enables programming language-oriented explanations. While the theoretical underpinnings of doCode are extensible to exploring different model properties, we provide a concrete instantiation that aims to mitigate the impact of spurious correlations by grounding explanations of model behavior in properties of programming languages. doCode can generate causal explanations based on Abstract Syntax Tree information and software engineering-based interventions. To demonstrate the practical benefit of doCode, I will present empirical results of using it to detect confounding bias in NCMs.
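To make the causal-intervention idea concrete, here is a minimal Python sketch (an illustration in the spirit of the abstract, not doCode's actual implementation): it applies one semantics-preserving, AST-level intervention, renaming identifiers, and measures how a model's score shifts under it. The model_score stub is hypothetical and stands in for whatever NCM is under study; a large average shift would suggest the model keys on surface naming rather than program structure, a possible spurious correlation.

import ast

class RenameIdentifiers(ast.NodeTransformer):
    """Rename every identifier to an uninformative token. Simplified:
    this also renames globals/builtins; a real tool would scope it."""
    def __init__(self):
        self.mapping = {}

    def _fresh(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._fresh(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._fresh(node.arg)
        return node

def intervene(source: str) -> str:
    """Apply the AST-level intervention and unparse back to source
    (requires Python 3.9+ for ast.unparse)."""
    return ast.unparse(RenameIdentifiers().visit(ast.parse(source)))

def model_score(source: str) -> float:
    """Hypothetical stand-in for an NCM's score of `source` (e.g., its
    log-likelihood); replace with a call to the model under study."""
    raise NotImplementedError

def naming_effect(snippets):
    """Average score shift under the intervention: a rough estimate of
    the causal effect of identifier naming on the model's predictions."""
    diffs = [model_score(intervene(s)) - model_score(s) for s in snippets]
    return sum(diffs) / len(diffs)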

Sun 14 Apr

Displayed time zone: Lisbon

14:00 - 15:30
Keynote 2 & Properties of Foundation Models (Research Track / Keynotes) at Luis de Freitas Branco
Chair(s): David Lo (Singapore Management University), Feifei Niu (University of Ottawa)
14:00
40m
Keynote
Keynote 2: Towards an Interpretable Science of Deep Learning for Software Engineering: A Causal Inference View
Keynotes
Denys Poshyvanyk (William & Mary)
14:40
14m
Full-paper
Exploring the Impact of the Output Format on the Evaluation of Large Language Models for Code Translation
Research Track
Marcos Macedo (Queen's University, Kingston, Ontario), Yuan Tian (Queen's University, Kingston, Ontario), Filipe Cogo (Centre for Software Excellence, Huawei Canada), Bram Adams (Queen's University)
Pre-print
14:54
7m
Short-paper
Is Attention All You Need? Toward a Conceptual Model for Social Awareness in Large Language Models (New Idea Paper)
Research Track
Gianmario Voria (University of Salerno), Gemma Catolino (University of Salerno), Fabio Palomba (University of Salerno)
Pre-print
15:01
14m
Full-paper
An Exploratory Investigation into Code License Infringements in Large Language Model Training Datasets
Research Track
Jonathan Katzy (Delft University of Technology), Răzvan Mihai Popescu (Delft University of Technology), Arie van Deursen (Delft University of Technology), Maliheh Izadi (Delft University of Technology)
15:15
15m
Other
Discussion
Research Track