Enhanced Prompting Framework for Code Summarization with Large Language Models
Code summarization is essential for enhancing the efficiency of software development, enabling developers to swiftly comprehend and maintain software projects. Recent efforts using large language models (LLMs) to generate precise code summaries have shown promising performance, primarily due to their advanced generative capabilities. LLMs that employ continuous prompting techniques can explore a broader problem space, potentially unlocking greater capabilities. However, they also present specific challenges, particularly in aligning with task-specific situations, a strength of discrete prompts. Additionally, the inherent differences between programming languages and natural languages can complicate comprehension for LLMs, reducing the accuracy and relevance of summaries in complex programming scenarios. These challenges may yield outputs that do not match actual task needs, underscoring the need for further research to improve the effectiveness of LLMs in code summarization. To address these limitations, we propose an Enhanced Prompting framework for Code Summarization with large language models (EP4CS). First, we design Mapper, which is pre-trained on code corpora and optimizes and updates prompt vectors based on the outputs of LLMs. Additionally, we develop Struct-Agent, which enables LLMs to interpret the complex semantic structures of programming languages more accurately through in-depth analysis of their syntax and structural characteristics. Experimental results show that, compared to existing baseline methods, our enhanced prompting framework significantly improves performance while maintaining the same parameter scale. Specifically, our framework improves scores by 4.45%, 3.77%, and 10.32% on the standard machine translation evaluation metrics BLEU, METEOR, and ROUGE-L, respectively.
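The abstract contrasts continuous (soft) prompting with discrete prompts. As a minimal, generic sketch of what continuous prompting means, and not of EP4CS's actual Mapper component, the example below prepends trainable vectors in embedding space to the embedded input; the dimensions, vocabulary, and function names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8        # toy embedding size (assumption for illustration)
NUM_SOFT_TOKENS = 4  # number of learnable continuous prompt vectors

# Frozen embedding table of a toy "model" with a 100-token vocabulary.
vocab_embeddings = rng.normal(size=(100, EMBED_DIM))

# Continuous (soft) prompt: free vectors in embedding space, not tied
# to any discrete vocabulary token. Because they can take any real
# value, they search a wider space than hand-written discrete prompts.
soft_prompt = rng.normal(size=(NUM_SOFT_TOKENS, EMBED_DIM))

def build_input(token_ids):
    """Prepend the soft prompt to the embedded input token sequence."""
    token_embeds = vocab_embeddings[token_ids]          # (seq, dim)
    return np.concatenate([soft_prompt, token_embeds])  # (soft+seq, dim)

# Three code tokens preceded by four soft-prompt vectors.
inputs = build_input([5, 17, 42])
print(inputs.shape)  # (7, 8)
```

During prompt tuning, only `soft_prompt` would receive gradient updates while the model's own weights stay frozen, which is why the parameter scale of the underlying LLM is unchanged.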
Thu 26 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
11:00 - 12:15
11:00 (25m, Talk) Enhanced Prompting Framework for Code Summarization with Large Language Models. Research Papers. Minying Fang (Qingdao University of Science and Technology), Xing Yuan (Qingdao University of Science and Technology), Yuying Li (Qingdao University of Science and Technology), Haojie Li (Qingdao University of Science and Technology), Chunrong Fang (Nanjing University), Junwei Du (Qingdao University of Science and Technology)
11:25 (25m, Talk) CrossProbe: LLM-empowered Cross-Project Bug Detection for Deep Learning Frameworks. Research Papers. Hao Guan (University of Queensland; Southern University of Science and Technology), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)
11:50 (25m, Talk) Safe4U: Identifying Unsound Safe Encapsulations of Unsafe Calls in Rust using LLMs. Research Papers. Huan Li (Zhejiang University, China), Bei Wang (Zhejiang University, China), Xing Hu (Zhejiang University), Xin Xia (Zhejiang University)
This is the main event hall of the Clarion Hotel, used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also take place in this room.
The room is just in front of the registration desk, on the other side of the main conference area. The two large doors with numbers “1” and “2” provide access to the Cosmos Hall.