INTENTFIX: Automated Logic Vulnerability Repair via LLM-Driven Intent Modeling
Logic vulnerabilities, which arise from semantic gaps between a developer’s intent and the actual code, represent a critical and growing challenge in software security. Unlike syntactic bugs, these vulnerabilities often pass traditional testing while harboring flaws that can lead to severe security breaches. We introduce INTENTFIX, a novel framework that automatically repairs logic vulnerabilities through intent-centric security analysis. INTENTFIX first leverages a Large Language Model (LLM) to systematically extract and formalize the developer’s implicit intent into a structured model. It then performs a differential analysis between this intent model and the implementation to precisely identify semantic gaps. Finally, it synthesizes and refines a patch through a multi-aspect, LLM-driven reasoning process. We evaluated INTENTFIX on a comprehensive dataset of 1,107 real-world CVE cases spanning five vulnerability types and 19 programming languages, achieving a patch accuracy of 64.5%, a 1.97× improvement over strong Chain-of-Thought (CoT) LLM baselines. This work makes three primary contributions: (1) we formalize intent-centric analysis as a new theoretical foundation for logic vulnerability repair; (2) we introduce a novel, structured framework that effectively harnesses an LLM’s reasoning capabilities for security; and (3) we provide extensive empirical evidence validating our approach and offering new insights into the role of context in automated program repair.
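The three-stage pipeline described above (intent extraction, differential gap analysis, patch synthesis) can be sketched as a minimal Python skeleton. This is an illustrative assumption of how the stages compose, not the paper's actual implementation: the function names (`extract_intent`, `find_semantic_gaps`, `synthesize_patch`), the `IntentModel` structure, and the rule-based stand-ins for the LLM calls are all hypothetical.

```python
# Hypothetical sketch of an intent-centric repair pipeline in the spirit of
# INTENTFIX. The pattern-matching stubs below stand in for LLM queries; a
# real system would prompt a model at each stage.
from dataclasses import dataclass, field

@dataclass
class IntentModel:
    """Structured record of the developer's implicit intent."""
    preconditions: list = field(default_factory=list)  # checks the code must perform

def extract_intent(code: str) -> IntentModel:
    """Stage 1: formalize implicit intent (LLM call stubbed with a rule)."""
    model = IntentModel()
    if "transfer" in code:  # a funds-transfer routine implies these guards
        model.preconditions += ["caller is authorized", "amount <= balance"]
    return model

def find_semantic_gaps(code: str, intent: IntentModel) -> list:
    """Stage 2: differential analysis between the intent model and the code."""
    gaps = []
    if "caller is authorized" in intent.preconditions and "is_authorized" not in code:
        gaps.append("missing authorization check")
    if "amount <= balance" in intent.preconditions and "balance" not in code:
        gaps.append("missing balance check")
    return gaps

def synthesize_patch(code: str, gaps: list) -> str:
    """Stage 3: synthesize a patch closing each gap (refinement loop omitted)."""
    guards = ""
    if "missing authorization check" in gaps:
        guards += "    if not is_authorized(caller): raise PermissionError\n"
    if "missing balance check" in gaps:
        guards += "    if amount > balance: raise ValueError\n"
    header, _, body = code.partition("\n")  # insert guards after the signature
    return header + "\n" + guards + body

vulnerable = "def transfer(caller, amount):\n    send(amount)\n"
intent = extract_intent(vulnerable)
gaps = find_semantic_gaps(vulnerable, intent)
patched = synthesize_patch(vulnerable, gaps)
```

The sketch shows only the data flow between stages; the paper's contribution lies in how each stage is realized with structured LLM reasoning rather than fixed rules.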