ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Fri 19 Apr 2024 17:00 - 17:07 at Almada Negreiros - Language Models and Generated Code 4 Chair(s): Shin Yoo

Language models for code, such as CodeBERT, can learn rich representations of source code, but their opacity makes it hard to understand which properties they capture. Recent attention-analysis studies offer initial interpretability insights, yet they focus solely on attention weights and overlook the wider context modeling of Transformers. This study sheds light on these previously ignored factors of the attention mechanism beyond the attention weights. We conduct an initial empirical study analyzing both attention distributions and transformed representations in CodeBERT. Across two programming languages, Java and Python, we find that the scaled transformation norms of the input capture syntactic structure better than attention weights alone. Our analysis provides an initial characterization of how CodeBERT embeds syntactic code properties. These findings demonstrate the importance of looking beyond attention weights when rigorously interpreting neural code models, and they lay the groundwork for more interpretable models and more effective uses of attention mechanisms in program analysis.
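For readers who want to probe this contrast themselves, the minimal sketch below (not the authors' artifact) extracts, for one layer and head of CodeBERT, the two quantities the abstract compares: the raw attention weights that prior analyses rely on, and the norms of the attention-scaled value vectors as a simple proxy for the scaled transformation norms. It assumes the public microsoft/codebert-base checkpoint and the Hugging Face transformers API; the toy snippet, the layer/head choice, and the use of value vectors in place of the full per-head transformation are illustrative simplifications.

```python
# Sketch: compare attention weights with norms of attention-scaled value vectors
# in CodeBERT. Assumes the public microsoft/codebert-base checkpoint; the input
# snippet and layer/head indices are arbitrary illustrative choices.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()  # disable dropout so attentions are deterministic

code = "def add(a, b): return a + b"  # toy Python input
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True, output_hidden_states=True)

layer, head = 0, 0
alpha = out.attentions[layer]  # attention weights: (batch, heads, seq, seq)

# Recompute this layer's per-head value vectors v_j from the layer's input.
self_attn = model.encoder.layer[layer].attention.self
x = out.hidden_states[layer]          # hidden states fed into this layer
v = self_attn.value(x)                # (batch, seq, d_model)
b, seq, _ = v.shape
h, d_head = self_attn.num_attention_heads, self_attn.attention_head_size
v = v.view(b, seq, h, d_head).permute(0, 2, 1, 3)  # (batch, heads, seq, d_head)

# Norm of each attention-scaled value vector, ||alpha_ij * v_j||.
# (v_j stands in for the full per-head transformation of the input.)
norms = (alpha.unsqueeze(-1) * v.unsqueeze(2)).norm(dim=-1)  # (b, heads, seq, seq)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print("query token      :", tokens[0])
print("attention weights:", alpha[0, head, 0, :5])
print("scaled-value norm:", norms[0, head, 0, :5])
```

Comparing the two printed rows position by position shows how a token can receive a large attention weight yet contribute a small transformed vector (or vice versa), which is the kind of discrepancy the study's norm-based analysis is designed to surface.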

Fri 19 Apr

Displayed time zone: Lisbon

16:00 - 17:30
Language Models and Generated Code 4 (New Ideas and Emerging Results / Research Track) at Almada Negreiros
Chair(s): Shin Yoo Korea Advanced Institute of Science and Technology
16:00
15m
Talk
Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code
Research Track
Rangeet Pan IBM Research, Ali Reza Ibrahimzada University of Illinois Urbana-Champaign, Rahul Krishna IBM Research, Divya Sankar IBM Research, Lambert Pouguem Wassi IBM Research, Michele Merler IBM Research, Boris Sobolev IBM Research, Raju Pavuluri IBM T.J. Watson Research Center, Saurabh Sinha IBM Research, Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
DOI Pre-print Media Attached
16:15
15m
Talk
Traces of Memorisation in Large Language Models for Code
Research Track
Ali Al-Kaswan Delft University of Technology, Netherlands, Maliheh Izadi Delft University of Technology, Arie van Deursen Delft University of Technology
Pre-print
16:30
15m
Talk
Language Models for Code Completion: A Practical Evaluation
Research Track
Maliheh Izadi Delft University of Technology, Jonathan Katzy Delft University of Technology, Tim van Dam Delft University of Technology, Marc Otten Delft University of Technology, Răzvan Mihai Popescu Delft University of Technology, Arie van Deursen Delft University of Technology
Pre-print
16:45
15m
Talk
Evaluating Large Language Models in Class-Level Code Generation
Research Track
Xueying Du Fudan University, Mingwei Liu Fudan University, Kaixin Wang Fudan University, Hanlin Wang Fudan University, Junwei Liu Huazhong University of Science and Technology, Yixuan Chen Fudan University, Jiayi Feng Fudan University, Chaofeng Sha Fudan University, Xin Peng Fudan University, Yiling Lou Fudan University
Pre-print
17:00
7m
Talk
Naturalness of Attention: Revisiting Attention in Code Language Models
New Ideas and Emerging Results
Mootez Saad Dalhousie University, Tushar Sharma Dalhousie University
Pre-print
17:07
7m
Talk
Towards Trustworthy AI Software Development Assistance
New Ideas and Emerging Results
Daniel Maninger TU Darmstadt, Krishna Narasimhan TU Darmstadt, Mira Mezini TU Darmstadt
DOI Pre-print