Semantic Similarity Metrics for Evaluating Source Code Summarization
Source code summarization involves creating brief natural-language descriptions of source code. These descriptions are a key component of software documentation such as JavaDocs. Automatic code summarization is a prized target of software engineering research, due to the high value summaries have for programmers and the simultaneously high cost of writing and maintaining documentation by hand. Most current work is based on machine learning models trained on large datasets of examples of code and summaries of that code, typically a neural model such as an encoder-decoder. The output predictions of the model are then evaluated against a set of reference summaries: the input is code not seen by the model, and the prediction is compared to a reference. The means by which a prediction is compared to a reference is essentially word overlap, calculated via a metric such as BLEU or ROUGE. The problem with using word overlap is that not all words in a sentence have the same importance, and many words have synonyms. The result is that calculated similarity may not match the similarity perceived by human readers. In this paper, we conduct an experiment to measure the degree to which various word overlap metrics correlate with human-rated similarity of predicted and reference summaries. We evaluate alternatives based on current work in semantic similarity metrics and propose recommendations for the evaluation of source code summarization.
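To illustrate the word-overlap problem the abstract describes, the sketch below computes unigram precision (the core of BLEU-1, without BLEU's clipping, higher-order n-grams, or brevity penalty). The example summaries are hypothetical, not drawn from the paper's dataset: a prediction that is semantically equivalent to the reference but uses synonyms scores only 0.5, because half of its words do not literally appear in the reference.

```python
def unigram_precision(prediction, reference):
    """Fraction of predicted tokens that also appear in the reference.

    A simplified stand-in for word-overlap metrics like BLEU-1;
    real BLEU additionally clips counts, combines n-gram orders,
    and applies a brevity penalty.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = set(reference.lower().split())
    if not pred_tokens:
        return 0.0
    hits = sum(1 for t in pred_tokens if t in ref_tokens)
    return hits / len(pred_tokens)

# Hypothetical reference and a synonym-heavy but equivalent prediction.
reference = "returns the length of the list"
prediction = "gets the size of the array"

# Only "the", "the", and "of" overlap: 3 of 6 tokens -> 0.5,
# even though a human reader would rate the two summaries as very similar.
print(unigram_precision(prediction, reference))  # -> 0.5
```

This is exactly the mismatch the paper studies: a semantic similarity metric (e.g., one based on word embeddings) would score this pair far closer to a human rating than raw overlap does.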
Sun 15 May (Eastern Time, US & Canada)
21:30 - 22:20 | Session 1: Summarization (Research) at ICPC room. Chair: Haipeng Cai, Washington State University, USA
21:51 | Talk (7m): Semantic Similarity Metrics for Evaluating Source Code Summarization. Sakib Haque, Zachary Eberhart, Aakash Bansal, Collin McMillan (University of Notre Dame). Media attached.