ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

Static analysis plays a crucial role in software vulnerability detection, yet faces a persistent precision-scalability trade-off. In large codebases like the Linux kernel, traditional static analysis tools often generate excessive false positives due to simplified vulnerability modeling and over-approximation of path and data constraints. While Large Language Models (LLMs) demonstrate promising code understanding capabilities, their direct application to program analysis remains unreliable due to inherent reasoning limitations.

We introduce BugLens, a post-refinement framework that significantly improves the precision of static analysis for bug detection. BugLens guides LLMs through structured reasoning steps to assess the security impact of each report and to validate path and data constraints against the source code. When evaluated on taint-style bugs in the Linux kernel reported by static analysis tools, BugLens improves precision approximately 7-fold (from 0.10 to 0.72), substantially reducing false positives while uncovering four previously unreported vulnerabilities. Our results demonstrate that a well-structured, fully automated LLM-based workflow can effectively complement and enhance traditional static analysis techniques.