ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Fri 19 Apr 2024 16:45 - 17:00 at Almada Negreiros - Language Models and Generated Code 4 Chair(s): Shin Yoo

Recently, many large language models (LLMs) have been proposed, showing advanced proficiency in code generation. Meanwhile, many efforts have been dedicated to evaluating LLMs on code generation benchmarks such as HumanEval. Although very helpful for comparing different LLMs, existing evaluations focus on a simple code generation scenario (i.e., function-level or statement-level code generation), which mainly asks LLMs to generate a single code unit (e.g., a function or a statement) for a given natural language description. Such evaluation targets independent and often small-scale code units, leaving it unclear how LLMs perform in real-world software development scenarios. To fill this knowledge gap, we make the first attempt to evaluate LLMs in a more challenging code generation scenario, i.e., class-level code generation. Compared with existing code generation benchmarks, it better reflects real-world software development because it comprises broader contextual dependencies and multiple, interdependent units of code. We first manually construct ClassEval, the first class-level code generation benchmark, consisting of 100 class-level Python code generation tasks built with approximately 500 person-hours. We then perform the first study of 11 LLMs on class-level code generation based on ClassEval. We find that all LLMs perform much worse on class-level code generation than on method-level code generation. While GPT models still dominate other LLMs on class-level code generation, the performance rankings of the other models on method-level code generation no longer hold at the class level. Besides, most models (except GPT models) perform better when generating the class method by method, and they show limited ability to generate dependent code.
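To illustrate the distinction the abstract draws, a class-level task gives the model a class skeleton (signatures and docstrings) and expects it to complete several interdependent methods, whereas a HumanEval-style task asks for one standalone function. The sketch below is a hypothetical example in the spirit of ClassEval, not an actual benchmark task; the class name and methods are invented for illustration, with a reference completion filled in:

```python
# Hypothetical class-level task: the model would receive only the
# skeleton (signatures + docstrings) and must complete every method.
# Note the dependency: average() relies on total() and on shared
# state set up in __init__ — exactly the kind of "dependent code"
# that method-by-method generation can get wrong.

class ScoreBook:
    """Tracks scores and reports statistics."""

    def __init__(self):
        # Shared state that later methods depend on.
        self.scores = []

    def add(self, score):
        """Record one score."""
        self.scores.append(score)

    def total(self):
        """Sum of all recorded scores."""
        return sum(self.scores)

    def average(self):
        """Mean score; depends on total() and the shared state."""
        if not self.scores:
            return 0.0
        return self.total() / len(self.scores)


book = ScoreBook()
book.add(80)
book.add(90)
print(book.average())  # 85.0
```

A function-level benchmark would pose each of these methods in isolation; the class-level setting forces the model to keep the shared `scores` state and the `total()`/`average()` dependency consistent across the whole class.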

Fri 19 Apr

Displayed time zone: Lisbon

16:00 - 17:30
Language Models and Generated Code 4 (New Ideas and Emerging Results / Research Track) at Almada Negreiros
Chair(s): Shin Yoo Korea Advanced Institute of Science and Technology
16:00
15m
Talk
Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code
Research Track
Rangeet Pan IBM Research, Ali Reza Ibrahimzada University of Illinois Urbana-Champaign, Rahul Krishna IBM Research, Divya Sankar IBM Research, Lambert Pouguem Wassi IBM Research, Michele Merler IBM Research, Boris Sobolev IBM Research, Raju Pavuluri IBM T.J. Watson Research Center, Saurabh Sinha IBM Research, Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
DOI Pre-print Media Attached
16:15
15m
Talk
Traces of Memorisation in Large Language Models for Code
Research Track
Ali Al-Kaswan Delft University of Technology, Netherlands, Maliheh Izadi Delft University of Technology, Arie van Deursen Delft University of Technology
Pre-print
16:30
15m
Talk
Language Models for Code Completion: A Practical Evaluation
Research Track
Maliheh Izadi Delft University of Technology, Jonathan Katzy Delft University of Technology, Tim van Dam Delft University of Technology, Marc Otten Delft University of Technology, Răzvan Mihai Popescu Delft University of Technology, Arie van Deursen Delft University of Technology
Pre-print
16:45
15m
Talk
Evaluating Large Language Models in Class-Level Code Generation
Research Track
Xueying Du Fudan University, Mingwei Liu Fudan University, Kaixin Wang Fudan University, Hanlin Wang Fudan University, Junwei Liu Huazhong University of Science and Technology, Yixuan Chen Fudan University, Jiayi Feng Fudan University, Chaofeng Sha Fudan University, Xin Peng Fudan University, Yiling Lou Fudan University
Pre-print
17:00
7m
Talk
Naturalness of Attention: Revisiting Attention in Code Language Models
New Ideas and Emerging Results
Mootez Saad Dalhousie University, Tushar Sharma Dalhousie University
Pre-print
17:07
7m
Talk
Towards Trustworthy AI Software Development Assistance
New Ideas and Emerging Results
Daniel Maninger TU Darmstadt, Krishna Narasimhan TU Darmstadt, Mira Mezini TU Darmstadt
DOI Pre-print