Code review is an essential process for ensuring software quality by identifying potential issues at an early stage of development. Among these issues, security issues are the most critical to identify, as they can easily lead to severe software crashes and service disruptions. Recent research efforts have been devoted to automated approaches that reduce the manual effort required in the secure code review process. Despite this progress, current automated approaches to secure code review, including static analysis, deep learning models, and prompting approaches, still face the challenges of limited precision and coverage, as well as a lack of comprehensive evaluation.

To mitigate these challenges, we propose iCodeReviewer, an automated secure code review approach based on large language models (LLMs). iCodeReviewer leverages a novel mixture-of-prompts architecture that incorporates multiple prompt experts to improve the coverage of security issues. Each prompt expert is a dynamic prompt pipeline that checks for the presence of a specific security issue. iCodeReviewer also implements an effective routing algorithm that activates only the necessary prompt experts based on code features in the input program, reducing the false positives induced by LLM hallucination. Experimental results on our internal dataset demonstrate the effectiveness of iCodeReviewer in security issue identification and localization, with an F1 score of 63.98%. The review comments generated by iCodeReviewer also achieve an acceptance rate of up to 84% when deployed in production environments.
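
To make the mixture-of-prompts and routing ideas concrete, the sketch below shows one possible way such a reviewer could gate prompt experts on simple code features before querying an LLM. The PromptExpert structure, the trigger predicates, and the route/review helpers are illustrative assumptions for this sketch, not the actual prompt pipelines or routing algorithm used by iCodeReviewer.

    # Minimal sketch of a mixture-of-prompts reviewer (illustrative only;
    # all names and trigger heuristics here are assumptions, not the paper's).
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class PromptExpert:
        """A prompt pipeline that checks for one specific security issue."""
        issue: str                       # e.g. "SQL injection"
        trigger: Callable[[str], bool]   # code-feature predicate used for routing
        prompt: str                      # instruction template sent to the LLM

    EXPERTS: List[PromptExpert] = [
        PromptExpert(
            issue="SQL injection",
            trigger=lambda code: "execute(" in code or "cursor" in code,
            prompt="Review the change below for SQL injection risks:\n{code}",
        ),
        PromptExpert(
            issue="Hard-coded credential",
            trigger=lambda code: "password" in code.lower() or "secret" in code.lower(),
            prompt="Review the change below for hard-coded credentials:\n{code}",
        ),
    ]

    def route(code: str) -> List[PromptExpert]:
        """Activate only the experts whose code-feature triggers fire,
        so checks irrelevant to this input never reach the LLM."""
        return [e for e in EXPERTS if e.trigger(code)]

    def review(code: str, ask_llm: Callable[[str], str]) -> List[str]:
        """Run each activated expert's prompt through a caller-supplied LLM
        client and collect its review comments."""
        return [ask_llm(e.prompt.format(code=code)) for e in route(code)]

Routing on cheap code features in this way keeps experts for unrelated issue types inactive, which is the property the approach relies on to limit hallucination-induced false positives.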