ISSTA 2025
Wed 25 - Sat 28 June 2025 Trondheim, Norway
co-located with FSE 2025
Thu 26 Jun 2025 11:00 - 11:25 at Cosmos Hall - LLM-based Code Analysis Chair(s): Zan Wang

Code summarization is essential for efficient software development, enabling developers to quickly comprehend and maintain software projects. Recent efforts that use large language models (LLMs) to generate precise code summaries have shown promising performance, largely due to their strong generative capabilities. LLMs that employ continuous prompting can explore a broader problem space and potentially unlock greater capability, but they also face specific challenges, particularly in aligning with task-specific situations, which is a strength of discrete prompts. In addition, the inherent differences between programming languages and natural languages can complicate comprehension for LLMs, reducing the accuracy and relevance of summaries in complex programming scenarios. These challenges can yield outputs that do not match actual task needs, underscoring the need for further research on the effectiveness of LLMs in code summarization. To address these limitations, we propose an enhanced prompting framework for code summarization with large language models (EP4CS). First, we design Mapper, which is pre-trained on code corpora and optimizes and updates prompt vectors based on the outputs of LLMs. In addition, we develop Struct-Agent, which enables LLMs to interpret the complex semantic structures of programming languages more accurately through in-depth analysis of their syntax and structural characteristics. Experimental results show that, compared with existing baselines, our enhanced prompting framework significantly improves performance while maintaining the same parameter scale. Specifically, it improves scores on the standard evaluation metrics BLEU, METEOR, and ROUGE-L by 4.45%, 3.77%, and 10.32%, respectively.
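To make the two ideas in the abstract concrete, the following is a minimal sketch of (1) continuous prompt vectors tuned from the LLM's summarization loss, in the spirit of Mapper, and (2) a small structural hint extracted from the code's syntax tree, in the spirit of Struct-Agent. It is not the authors' implementation: the backbone model (Salesforce/codet5-small), the prompt length, the learning rate, and the helper names structural_hint and prompt_tuning_step are all illustrative assumptions.

```python
# Hedged sketch: soft-prompt tuning plus a syntactic hint for code summarization.
# Model choice, hyper-parameters, and helper names are assumptions, not EP4CS itself.
import ast

import torch
from torch import nn
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "Salesforce/codet5-small"  # assumed seq2seq backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.requires_grad_(False)  # freeze the LLM; only the prompt vectors are trained

N_PROMPT = 20  # number of continuous prompt tokens (assumption)
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = nn.Parameter(torch.randn(N_PROMPT, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=3e-4)


def structural_hint(code: str) -> str:
    """Tiny stand-in for structural analysis: summarize the syntactic shape
    of a Python snippet (functions, loops, calls) as a textual prefix."""
    tree = ast.parse(code)
    funcs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    n_loops = sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))
    n_calls = sum(isinstance(n, ast.Call) for n in ast.walk(tree))
    return f"structure: functions={funcs}, loops={n_loops}, calls={n_calls} | "


def prompt_tuning_step(code: str, reference_summary: str) -> float:
    """One optimization step over the soft prompt, driven by the LLM's loss."""
    text = structural_hint(code) + code
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=480)
    labels = tokenizer(reference_summary, return_tensors="pt").input_ids

    # Prepend the trainable prompt vectors to the token embeddings.
    token_embeds = model.get_input_embeddings()(enc.input_ids)
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(1, N_PROMPT, dtype=enc.attention_mask.dtype), enc.attention_mask],
        dim=1,
    )

    loss = model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because the backbone stays frozen and only the prompt vectors receive gradients, a setup of this kind keeps the overall parameter scale unchanged, which matches the claim in the abstract.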

Thu 26 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:15
LLM-based Code Analysis (Research Papers) at Cosmos Hall
Chair(s): Zan Wang (Tianjin University)
11:00
25m
Talk
Enhanced Prompting Framework for Code Summarization with Large Language Models
Research Papers
Minying Fang (Qingdao University of Science and Technology), Xing Yuan (Qingdao University of Science and Technology), Yuying Li (Qingdao University of Science and Technology), Haojie Li (Qingdao University of Science and Technology), Chunrong Fang (Nanjing University), Junwei Du (Qingdao University of Science and Technology)
11:25
25m
Talk
CrossProbe: LLM-empowered Cross-Project Bug Detection for Deep Learning Frameworks
Research Papers
Hao Guan (University of Queensland; Southern University of Science and Technology), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)
11:50
25m
Talk
Safe4U: Identifying Unsound Safe Encapsulations of Unsafe Calls in Rust using LLMs
Research Papers
Huan Li (Zhejiang University, China), Bei Wang (Zhejiang University, China), Xing Hu (Zhejiang University), Xin Xia (Zhejiang University)

Information for Participants
Thu 26 Jun 2025 11:00 - 12:15 at Cosmos Hall - LLM-based Code Analysis Chair(s): Zan Wang
Info for room Cosmos Hall:

This is the main event hall of Clarion Hotel, which will be used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also happen in this room.

The room is just in front of the registration desk, on the other side of the main conference area. The two large doors with numbers “1” and “2” provide access to the Cosmos Hall.
