ESEIW 2022
Sun 18 - Fri 23 September 2022 Helsinki, Finland
Fri 23 Sep 2022 13:45 - 14:00 at Sonck - Session 5B - Development & Testing & Behavioral 2 Chair(s): Sheila Reinehr

Background: Code summarization automatically generates natural language descriptions of input code to characterize the functionality the source code implements. Comprehensive code representation is critical to the code summarization task. However, most existing approaches integrate multi-modal features with coarse-grained fusion methods: they represent different modalities of a piece of code, such as an Abstract Syntax Tree (AST) and a token sequence, as two embeddings and then fuse them at the AST/code level. Such coarse integration makes it difficult to effectively learn the correlations between fine-grained code elements across modalities.
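To picture the coarse-grained fusion criticized above, the following minimal PyTorch sketch pools each modality into a single vector before combining them; the module name, dimensions, and mean-pooling are illustrative assumptions, not taken from any specific prior approach.

import torch
import torch.nn as nn

class CoarseGrainedFusion(nn.Module):
    """Illustrative coarse-grained fusion: each modality is pooled into a
    single vector before the two vectors are combined, so correlations
    between individual tokens and AST nodes are lost."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, token_emb: torch.Tensor, ast_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, n_tokens, d_model); ast_emb: (batch, n_nodes, d_model)
        token_vec = token_emb.mean(dim=1)  # one vector per token sequence
        ast_vec = ast_emb.mean(dim=1)      # one vector per AST
        return self.proj(torch.cat([token_vec, ast_vec], dim=-1))

# Example: 8 snippets, 50 tokens and 40 AST nodes each, 256-dim embeddings.
fused = CoarseGrainedFusion()(torch.randn(8, 50, 256), torch.randn(8, 40, 256))
print(fused.shape)  # torch.Size([8, 256])

Because the fusion happens only after pooling, any alignment between a particular token and a particular AST node is already lost at this point, which is the limitation the paper targets.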

Aims: This study aims to improve prediction performance for high-quality code summarization by accurately aligning and fully fusing the semantic and syntactic structure information of source code at the node/token level.

Method: This paper proposes a Multi-Modal Fine-grained Feature Fusion approach (MMF3) for neural code summarization, built on the Transformer architecture. In particular, we introduce a novel fine-grained fusion method that fuses multiple code modalities at the token and node levels. Specifically, we use this method to fuse information from the token and AST modalities and apply the fused features to code summarization.
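As an intuition for token/node-level fusion, the sketch below uses bidirectional cross-attention between token embeddings and AST node embeddings, so that every token can attend to every node and vice versa. This is a hedged illustration of the general idea only; the module names, dimensions, and the use of PyTorch MultiheadAttention are assumptions for the example, not the authors' exact MMF3 architecture.

import torch
import torch.nn as nn

class FineGrainedFusion(nn.Module):
    """Illustrative fine-grained fusion: tokens attend to AST nodes and
    nodes attend to tokens, so element-level correlations across modalities
    can be learned instead of being collapsed into per-snippet vectors."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.token_to_node = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.node_to_token = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_tok = nn.LayerNorm(d_model)
        self.norm_node = nn.LayerNorm(d_model)

    def forward(self, token_emb: torch.Tensor, ast_emb: torch.Tensor):
        # token_emb: (batch, n_tokens, d_model); ast_emb: (batch, n_nodes, d_model)
        tok_ctx, _ = self.token_to_node(token_emb, ast_emb, ast_emb)      # tokens query nodes
        node_ctx, _ = self.node_to_token(ast_emb, token_emb, token_emb)   # nodes query tokens
        fused_tokens = self.norm_tok(token_emb + tok_ctx)
        fused_nodes = self.norm_node(ast_emb + node_ctx)
        return fused_tokens, fused_nodes  # fine-grained features for the decoder

# Example: 8 snippets, 50 tokens and 40 AST nodes each, 256-dim embeddings.
tok, node = FineGrainedFusion()(torch.randn(8, 50, 256), torch.randn(8, 40, 256))
print(tok.shape, node.shape)  # torch.Size([8, 50, 256]) torch.Size([8, 40, 256])

The design point illustrated here is that fusion produces a representation per token and per node rather than a single vector per code snippet, which is what allows element-level alignment across modalities.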

Results: We conduct experiments on one Java and one Python dataset and evaluate the generated summaries with four metrics. The results show that: 1) our model outperforms the current state-of-the-art models, and 2) ablation experiments confirm that the proposed fine-grained fusion method effectively improves the accuracy of the generated summaries.

Conclusion: MMF3 can mine the relationships between cross-modal elements and accordingly perform accurate fine-grained, element-level alignment and fusion. As a result, more clues are provided to improve the accuracy of the generated code summaries.


Fri 23 Sep

Displayed time zone: Athens

13:30 - 15:00
Session 5B - Development & Testing & Behavioral 2 (ESEM Technical Papers) at Sonck
Chair(s): Sheila Reinehr Pontifícia Universidade Católica do Paraná (PUCPR)
13:30
15m
Full-paper
Potential Technical Debt and Its Resolution in Code Reviews: An Exploratory Study of the OpenStack and Qt Communities
ESEM Technical Papers
Liming Fu Wuhan University, Peng Liang Wuhan University, China, Zeeshan Rasheed Wuhan University, Zengyang Li Central China Normal University, Amjed Tahir Massey University, Xiaofeng Han Wuhan University, China
13:45
15m
Full-paper
MMF3: Neural Code Summarization Based on Multi-Modal Fine-Grained Feature Fusion
ESEM Technical Papers
Zheng Ma Shandong Normal University, Yuexiu Gao Shandong Normal University, Lei Lyu Shandong Normal University, Chen Lyu Shandong Normal University
14:00
15m
Full-paper
PG-VulNet: Detect Supply Chain Vulnerabilities in IoT Devices using Pseudo-code and Graphs
ESEM Technical Papers
Xin Liu Lanzhou University, Yixiong Wu Institute for Network Science and Cyberspace of Tsinghua University, Qingchen Yu Zhejiang University, Shangru Song Beijing Institute of Technology, Yue Liu Southeast University; Qi An Xin Group Corp., Qingguo Zhou Lanzhou University, Jianwei Zhuge Tsinghua University
14:15
15m
Full-paper
Heterogeneous Graph Neural Networks for Software Effort Estimation
ESEM Technical Papers
Hung Phan Iowa State University, Ali Jannesari Iowa State University
14:30
15m
Full-paper
Meetings and Mood - Related or Not? Insights from Student Software Projects
ESEM Technical Papers
Jil Klünder Leibniz Universität Hannover, Oliver Karras TIB - Leibniz Information Centre for Science and Technology
14:45
15m
Full-paper
A Tale of Two Tasks: Automated Issue Priority Prediction with Deep Multi-task Learning
ESEM Technical Papers
Yingling Li, Xing Che, Yuekai Huang Institute of Software, Chinese Academy of Sciences, Junjie Wang Institute of Software at Chinese Academy of Sciences, Song Wang York University, Yawen Wang Institute of Software, Chinese Academy of Sciences, Qing Wang Institute of Software at Chinese Academy of Sciences