ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Wed 17 Apr 2024 11:45 - 12:00 at Amália Rodrigues - Evolution & AI Chair(s): Oscar Chaparro

Large Language Models (LLMs) are a new class of computation engines, “programmed” via prompt engineering. Researchers are still learning how best to “program” these LLMs to help developers. We start with the intuition that developers, while working, consciously and unconsciously collect semantic facts from the code. Mostly these are shallow, simple facts arising from a quick read. For a function, such facts might include parameter and local variable names, return expressions, simple pre- and post-conditions, and basic control and data flow. One might assume that the powerful multi-layer architecture of transformer-style LLMs makes them implicitly capable of this simple level of “code analysis”, extracting such information while processing code: but are they, really? If they aren’t, could explicitly adding this information help? Our goal here is to investigate this question, using the code summarization task, and to evaluate whether automatically augmenting an LLM’s prompt with explicit semantic facts actually helps. Prior work shows that LLM performance on code summarization benefits from embedding a few code & summary exemplars in the prompt, before the code to be summarized. While summarization performance has steadily progressed since the early days, there is still room for improvement: LLM performance on code summarization still lags its performance on natural-language tasks like translation and text summarization. We find that adding semantic facts to the code in the prompt actually does help! This approach improves performance in several different settings suggested by prior work, including for three different Large Language Models. In most cases we see improvements, as measured by a range of commonly used metrics; for the PHP language in the challenging CodeSearchNet dataset, this augmentation actually yields performance surpassing 30 BLEU.
In addition, we have also found that including semantic facts yields a substantial enhancement in LLMs’ line completion performance.
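The shallow facts the abstract describes (parameter names, local variables, return expressions) can be collected automatically with off-the-shelf parsing. Below is a minimal illustrative sketch in Python, using the standard `ast` module; the helper names and the prompt layout are our own assumptions for illustration, not the authors’ actual pipeline:

```python
import ast


def extract_semantic_facts(source: str) -> dict:
    """Collect shallow semantic facts from the first function in `source`:
    its name, parameter names, locally assigned variables, and return
    expressions. These mirror the kinds of facts the abstract mentions."""
    tree = ast.parse(source)
    fn = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    params = [a.arg for a in fn.args.args]
    local_vars = sorted({n.id for n in ast.walk(fn)
                         if isinstance(n, ast.Name)
                         and isinstance(n.ctx, ast.Store)})
    returns = [ast.unparse(n.value) for n in ast.walk(fn)
               if isinstance(n, ast.Return) and n.value is not None]
    return {"name": fn.name, "params": params,
            "locals": local_vars, "returns": returns}


def augment_prompt(source: str) -> str:
    """Prepend the extracted facts to the code, ahead of a summarization
    request (a hypothetical prompt format, for illustration only)."""
    f = extract_semantic_facts(source)
    facts = (f"# Function: {f['name']}\n"
             f"# Parameters: {', '.join(f['params']) or 'none'}\n"
             f"# Locals: {', '.join(f['locals']) or 'none'}\n"
             f"# Returns: {'; '.join(f['returns']) or 'nothing'}\n")
    return facts + source + "\nSummarize the function above in one sentence."
```

In a few-shot setting, the same augmentation would be applied to each exemplar’s code as well as to the code to be summarized, so the model sees facts-plus-code pairs consistently throughout the prompt.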

Wed 17 Apr

Displayed time zone: Lisbon

11:00 - 12:30
Evolution & AI (Research Track) at Amália Rodrigues
Chair(s): Oscar Chaparro William & Mary
11:00
15m
Talk
Prism: Decomposing Program Semantics for Code Clone Detection through Compilation
Research Track
Haoran Li Nankai University, wangsiqian Nankai University, Weihong Quan Nankai University, Xiaoli Gong Nankai University, Huayou Su NUDT, Jin Zhang Hunan Normal University
11:15
15m
Talk
Evaluating Code Summarization Techniques: A New Metric and an Empirical Characterization
Research Track
Antonio Mastropaolo Università della Svizzera italiana, Matteo Ciniselli Università della Svizzera Italiana, Massimiliano Di Penta University of Sannio, Italy, Gabriele Bavota Software Institute @ Università della Svizzera Italiana
11:30
15m
Talk
Are Prompt Engineering and TODO Comments Friends or Foes? An Evaluation on GitHub Copilot
Research Track
David O'Brien Iowa State University, Sumon Biswas Carnegie Mellon University, Sayem Mohammad Imtiaz Iowa State University, Rabe Abdalkareem Omar Al-Mukhtar University, Emad Shihab Concordia University, Hridesh Rajan Iowa State University
11:45
15m
Talk
Automatic Semantic Augmentation of Language Model Prompts (for Code Summarization)
Research Track
Toufique Ahmed University of California at Davis, Kunal Suresh Pai UC Davis, Prem Devanbu University of California at Davis, Earl T. Barr University College London
DOI Pre-print
12:00
15m
Talk
DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions
Research Track
Zhiwei Xu Tsinghua University, Shaohua Qiang Tsinghua University, Dinghong Song Tsinghua University, Min Zhou Tsinghua University, Hai Wan Tsinghua University, Xibin Zhao Tsinghua University, Ping Luo Tsinghua University, Hongyu Zhang Chongqing University
12:15
15m
Talk
Machine Learning is All You Need: A Simple Token-based Approach for Effective Code Clone Detection
Research Track
Siyue Feng Huazhong University of Science and Technology, Wenqi Suo Huazhong University of Science and Technology, Yueming Wu Nanyang Technological University, Deqing Zou Huazhong University of Science and Technology, Yang Liu Nanyang Technological University, Hai Jin Huazhong University of Science and Technology