M2TS: Multi-Scale Multi-Modal Approach Based on Transformer for Source Code Summarization
Source code summarization aims to generate natural language descriptions of code snippets that can help developers comprehend the code and save time reading it. Many existing studies learn the syntactic and semantic knowledge of code snippets from their token sequences and Abstract Syntax Trees (ASTs). They use the learned code representations as input to code summarization models, which can accordingly generate summaries describing the source code. Traditional models traverse ASTs as sequences or split ASTs into paths as input. However, the former loses the structural properties of ASTs, and the latter destroys the overall structure of ASTs. Therefore, comprehensively capturing the structural features of ASTs when learning code representations for source code summarization remains a challenging problem. In this paper, we propose M2TS, a Multi-scale Multi-modal approach based on Transformer for source code Summarization. M2TS uses a multi-scale AST feature extraction method, where multi-scale refers to the different power matrices obtained from the AST adjacency matrix. This method represents ASTs at multiple local and global levels, which extracts the structure of ASTs more completely and accurately. To complement the semantic information missing from ASTs, we also obtain source code features and combine them with the extracted AST features using a cross-modality fusion method that not only fuses the syntactic and contextual semantic information of source code, but also highlights the key features of each modality. In particular, the cross-modality fusion builds on the multi-scale method, so the latter contributes to the former. We conduct experiments on two Java datasets and one Python dataset, and the experimental results demonstrate that M2TS outperforms current state-of-the-art methods. We make our code publicly available at https://github.com/TranSMS/M2TS.
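The abstract describes "multi-scale" as the different power matrices of the AST adjacency matrix: the k-th power A^k counts walks of length k between nodes, so small k captures local structure and larger k captures more global structure. The sketch below illustrates this idea on a hypothetical toy AST; the node layout and scale choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy AST (hypothetical): node 0 is the root with children 1 and 2;
# node 2 has children 3 and 4.
edges = [(0, 1), (0, 2), (2, 3), (2, 4)]
n = 5

# Build a symmetric adjacency matrix for the tree.
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

# Multi-scale structural views: powers A^1, A^2, A^3 of the adjacency
# matrix. A^k[i, j] is the number of length-k walks between nodes i and j.
scales = [np.linalg.matrix_power(A, k) for k in (1, 2, 3)]

# A^2[0, 3] counts 2-step walks from the root to node 3 (via node 2).
print(scales[1][0, 3])  # → 1
```

Each power matrix emphasizes a different neighborhood radius, which is what lets the model see both local parent-child links and longer-range structure of the tree at once.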
Sun 15 May (displayed time zone: Eastern Time, US & Canada)
| Time | Format | Title and Speakers |
| --- | --- | --- |
| 21:30 - 22:20 | Session | Session 1: Summarization (ICPC room). Chair(s): Haipeng Cai, Washington State University, USA |
| 21:30 | 7m Talk | PTM4Tag: Sharpening Tag Recommendation of Stack Overflow with Pre-trained Models. Junda He, Bowen Xu, Zhou Yang, DongGyun Han, Chengran Yang, David Lo (Singapore Management University) |
| 21:37 | 7m Talk | GypSum: Learning Hybrid Representations for Code Summarization. Yu Wang, Yu Dong, Xuesong Lu (School of Data Science and Engineering, East China Normal University), Aoying Zhou (East China Normal University) |
| 21:44 | 7m Talk | M2TS: Multi-Scale Multi-Modal Approach Based on Transformer for Source Code Summarization |
| 21:51 | 7m Talk | Semantic Similarity Metrics for Evaluating Source Code Summarization. Sakib Haque, Zachary Eberhart, Aakash Bansal, Collin McMillan (University of Notre Dame) |
| 21:58 | 7m Talk | LAMNER: Code Comment Generation Using Character Language Model and Named Entity Recognition. Rishab Sharma, Fuxiang Chen, Fatemeh Hendijani Fard (University of British Columbia) |
| 22:05 | 15m Live Q&A | Q&A: Paper Session 1 |
