Enhancing Vulnerability Detection via Inter-procedural Semantic Completion
Inspired by advances in deep learning, numerous learning-based approaches to vulnerability detection have emerged, most operating at the function level for scalability. However, this design choice has a critical limitation: many vulnerabilities span multiple functions, so function-level approaches lose the semantics of called functions and fail to capture true vulnerability patterns. To address this issue, we propose VulnSC, a novel framework that enhances learning-based approaches by restoring inter-procedural semantics. For each function in a dataset, VulnSC retrieves the source code of its called functions and leverages large language models (LLMs) with carefully designed prompts to generate summaries of those callees. The dataset, augmented with these summaries, is then fed into neural networks for improved vulnerability detection. VulnSC is the first framework to integrate inter-procedural semantics into learning-based approaches while maintaining scalability. We evaluate VulnSC on four state-of-the-art learning-based approaches using a widely used dataset, and our experimental results show that VulnSC significantly improves detection performance with minimal additional computational overhead.
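The pipeline described above (retrieve the source of each called function, summarize it, and append the summaries to the detector's input) can be sketched roughly as follows. This is a minimal illustration under assumed names: the mini-codebase, the Python-source target (the paper targets real-world code), and the stubbed `summarize` function standing in for the LLM call are all hypothetical, not the paper's implementation.

```python
import ast

# Hypothetical mini-codebase: function name -> source code.
CODEBASE = {
    "copy_buf": "def copy_buf(dst, src, n):\n    dst[:n] = src[:n]\n",
    "get_len": "def get_len(pkt):\n    return pkt['len']\n",
}

def called_functions(source: str) -> list[str]:
    """Collect the names of functions called inside `source`."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            names.append(node.func.id)
    return names

def summarize(name: str, source: str) -> str:
    """Stub for the LLM call: a real system would prompt an LLM with
    `source` and a summary-generation template."""
    return f"{name}: summary of callee behavior"

def augment(target_source: str) -> str:
    """Append callee summaries to the target function's source, producing
    the semantics-enhanced input for the downstream detector."""
    summaries = [
        summarize(callee, CODEBASE[callee])
        for callee in called_functions(target_source)
        if callee in CODEBASE
    ]
    return (
        target_source
        + "\n# Callee summaries:\n"
        + "\n".join(f"# {s}" for s in summaries)
    )

target = (
    "def handle(pkt, dst, src):\n"
    "    n = get_len(pkt)\n"
    "    copy_buf(dst, src, n)\n"
)
augmented = augment(target)
print(augmented)
```

The augmented text, rather than the bare function body, is what would be fed to the neural detector, so the model sees a description of callee behavior without the cost of whole-program analysis.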