iCodeReviewer: Improving Secure Code Review with Mixture of Prompts
This program is tentative and subject to change.
Code review is an essential process for ensuring software quality by identifying potential issues at an early stage of software development. Among these issues, security issues are the most important to identify, as they can easily lead to severe software crashes and service disruptions. Recent research efforts have been devoted to automated approaches that reduce the manual effort required in the secure code review process. Despite this progress, current automated approaches to secure code review, including static analysis, deep learning models, and prompting approaches, still suffer from limited precision and coverage and a lack of comprehensive evaluation.
To mitigate these challenges, we propose iCodeReviewer, an automated secure code review approach based on large language models (LLMs). iCodeReviewer leverages a novel mixture-of-prompts architecture that incorporates multiple prompt experts to improve the coverage of security issues. Each prompt expert is a dynamic prompt pipeline that checks for the presence of a specific security issue. iCodeReviewer also implements an effective routing algorithm that activates only the necessary prompt experts based on the code features of the input program, reducing false positives induced by LLM hallucination. Experimental results on our internal dataset demonstrate the effectiveness of iCodeReviewer in security issue identification and localization, with an F1 score of 63.98%. The review comments generated by iCodeReviewer also achieve an acceptance rate of up to 84% when deployed in production environments.
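The mixture-of-prompts idea described in the abstract can be pictured with a small sketch. The Python below is purely illustrative and is not taken from the paper: the expert names, the keyword-based trigger features, and the `llm_complete` callback are hypothetical stand-ins for iCodeReviewer's actual prompt experts and routing algorithm.

```python
# Illustrative sketch of a mixture-of-prompts reviewer, assuming a generic
# LLM completion callback. Expert names, trigger heuristics, and prompt
# templates are hypothetical examples, not the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PromptExpert:
    """A prompt pipeline dedicated to one security issue category."""
    name: str
    trigger: Callable[[str], bool]   # code-feature check that activates the expert
    prompt_template: str             # instruction sent to the LLM when activated


EXPERTS = [
    PromptExpert(
        name="sql_injection",
        trigger=lambda code: "execute(" in code or "cursor." in code,
        prompt_template="Review the following change for SQL injection risks:\n{code}",
    ),
    PromptExpert(
        name="hardcoded_secret",
        trigger=lambda code: any(k in code.lower() for k in ("password", "api_key", "token")),
        prompt_template="Check whether the following change hard-codes credentials:\n{code}",
    ),
]


def route(code: str) -> List[PromptExpert]:
    """Activate only the experts whose code features appear in the input."""
    return [expert for expert in EXPERTS if expert.trigger(code)]


def review(code: str, llm_complete: Callable[[str], str]) -> List[str]:
    """Run each activated expert's prompt and collect its review comments."""
    return [
        llm_complete(expert.prompt_template.format(code=code))
        for expert in route(code)
    ]
```

Routing on code features before invoking any expert reflects the motivation stated above: experts whose issue category is irrelevant to the input never run, so they cannot contribute hallucinated findings to the review.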
Mon 17 Nov (displayed time zone: Seoul)
16:00 - 17:00

16:00 | 10m Talk | SGCR: A Specification-Grounded Framework for Trustworthy LLM Code Review (Industry Showcase)
Kai Wang (HiThink Research), Bingcheng Mao (HiThink Research), Shuai Jia (HiThink Research), Yujie Ding (HiThink Research), Dongming Han (HiThink Research), Tianyi Ma (HiThink Research), Bin Cao (Zhejiang University of Technology)

16:10 | 10m Talk | What Types of Code Review Comments Do Developers Most Frequently Resolve? (Industry Showcase)
Saul Goldman (The University of Melbourne), Hong Yi Lin (The University of Melbourne), Jirat Pasuksmit (Atlassian), Patanamon Thongtanunam (University of Melbourne), Kla Tantithamthavorn (Monash University and Atlassian), Zhe Wang (Institute of Computing Technology at Chinese Academy of Sciences; Zhongguancun Laboratory), Ruixiong Zhang (Atlassian), Ali Behnaz (Atlassian), Fan Jiang (Atlassian), Michael Siers (Atlassian), Ryan Jiang (Atlassian), Mike Buller (Atlassian), Minwoo Jeong (Atlassian), Ming Wu (Atlassian)

16:20 | 10m Talk | Vessel: A Taxonomy of Reproducibility Issues for Container Images (NIER Track)
Kevin Pitstick (Carnegie Mellon Software Engineering Institute), Alex Derr (Carnegie Mellon Software Engineering Institute), Lihan Zhan (Carnegie Mellon Software Engineering Institute), Sebastian Echeverria (Carnegie Mellon Software Engineering Institute)

16:30 | 10m Talk | From Modules to Marketplaces: A Vision for Composable Capability Sharing Across Organizations (NIER Track)
Wei-Ji Wang (National Taiwan University & Chunghwa Telecom Laboratories)

16:40 | 10m Talk | Towards Automated Governance: A DSL for Human-Agent Collaboration in Software Projects (NIER Track) | Pre-print
Adem Ait (University of Luxembourg), Gwendal Jouneaux (Luxembourg Institute of Science and Technology), Javier Luis Cánovas Izquierdo (Universitat Oberta de Catalunya), Jordi Cabot (Luxembourg Institute of Science and Technology)

16:50 | 10m Talk | iCodeReviewer: Improving Secure Code Review with Mixture of Prompts (Industry Showcase)