Evaluating Code Summarization Techniques: A New Metric and an Empirical Characterization
Several code summarization techniques have been proposed in the literature to automatically document a code snippet or a function. Ideally, software developers should be involved in assessing the quality of the generated summaries. However, in most cases, researchers rely on automatic evaluation metrics such as BLEU, ROUGE, and METEOR. These metrics all rest on the same assumption: the higher the textual similarity between the generated summary and a reference summary written by developers, the higher its quality. There are two reasons why this assumption falls short: (i) reference summaries, e.g., code comments collected by mining software repositories, may be of low quality or even outdated; (ii) a generated summary, while using a different wording than the reference one, could be semantically equivalent to it, and thus still be suitable to document the code snippet. In this paper, we perform a thorough empirical investigation into the complementarity of different types of metrics in capturing the quality of a generated summary. We also propose to address the limitations of existing metrics by considering a new dimension: the extent to which the generated summary aligns with the semantics of the documented code snippet, independently of the reference summary. To this end, we present a new metric based on contrastive learning that captures this aspect. We empirically show that including this novel dimension yields a more effective representation of developers' evaluations of the quality of automatically generated summaries.
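To make the contrast between the two evaluation dimensions concrete, below is a minimal sketch (not the metric proposed in the paper): it computes a reference-based score with BLEU and a reference-free code-summary alignment score from off-the-shelf sentence embeddings. It assumes the nltk and sentence-transformers Python packages are installed; the model name all-MiniLM-L6-v2 and the example snippet are arbitrary stand-ins, not the contrastive model trained in the paper.

# Illustrative sketch only: reference-based vs. reference-free views of summary quality.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sentence_transformers import SentenceTransformer, util

code = "def is_even(n):\n    return n % 2 == 0"
reference = "checks whether a number is even"
generated = "returns true if the given integer is divisible by two"

# Reference-based view: n-gram overlap with the reference is low,
# even though the generated summary is semantically equivalent.
bleu = sentence_bleu(
    [reference.split()], generated.split(),
    smoothing_function=SmoothingFunction().method1,
)

# Reference-free view: embed the code and the generated summary,
# then score how well the summary aligns with the code's semantics.
# (A generic bi-encoder is used here purely for illustration.)
model = SentenceTransformer("all-MiniLM-L6-v2")
code_emb, summary_emb = model.encode([code, generated], convert_to_tensor=True)
alignment = util.cos_sim(code_emb, summary_emb).item()

print(f"BLEU vs. reference:     {bleu:.3f}")
print(f"Code-summary alignment: {alignment:.3f}")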
Wed 17 Apr (displayed time zone: Lisbon)
11:00 - 12:30
11:00 (15m) Talk | Prism: Decomposing Program Semantics for Code Clone Detection through Compilation | Research Track
Haoran Li (Nankai University), wangsiqian (Nankai University), Weihong Quan (Nankai University), Xiaoli Gong (Nankai University), Huayou Su (NUDT), Jin Zhang (Hunan Normal University)
11:15 (15m) Talk | Evaluating Code Summarization Techniques: A New Metric and an Empirical Characterization | Research Track
Antonio Mastropaolo (Università della Svizzera italiana), Matteo Ciniselli (Università della Svizzera italiana), Massimiliano Di Penta (University of Sannio, Italy), Gabriele Bavota (Software Institute @ Università della Svizzera italiana)
11:30 (15m) Talk | Are Prompt Engineering and TODO Comments Friends or Foes? An Evaluation on GitHub Copilot | Research Track
David OBrien (Iowa State University), Sumon Biswas (Carnegie Mellon University), Sayem Mohammad Imtiaz (Iowa State University), Rabe Abdalkareem (Omar Al-Mukhtar University), Emad Shihab (Concordia University), Hridesh Rajan (Iowa State University)
11:45 (15m) Talk | Automatic Semantic Augmentation of Language Model Prompts (for Code Summarization) | Research Track
Toufique Ahmed (University of California at Davis), Kunal Suresh Pai (UC Davis), Prem Devanbu (University of California at Davis), Earl T. Barr (University College London)
12:00 (15m) Talk | DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions | Research Track
Zhiwei Xu (Tsinghua University), Shaohua Qiang (Tsinghua University), Dinghong Song (Tsinghua University), Min Zhou (Tsinghua University), Hai Wan (Tsinghua University), Xibin Zhao (Tsinghua University), Ping Luo (Tsinghua University), Hongyu Zhang (Chongqing University)
12:15 (15m) Talk | Machine Learning is All You Need: A Simple Token-based Approach for Effective Code Clone Detection | Research Track
Siyue Feng (Huazhong University of Science and Technology), Wenqi Suo (Huazhong University of Science and Technology), Yueming Wu (Nanyang Technological University), Deqing Zou (Huazhong University of Science and Technology), Yang Liu (Nanyang Technological University), Hai Jin (Huazhong University of Science and Technology)