Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs
Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements. Ensuring that LLMs possess sufficient knowledge to address specific security requirements, including information about existing attacks, remains a key challenge. To tackle this, we propose an approach that integrates Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline. RAG enhances the robustness of the output by incorporating external knowledge sources, while the Self-Ranking technique, inspired by the concept of Self-Consistency, generates multiple reasoning paths and ranks the resulting candidates to select the most robust detector. Our extensive empirical study targets code generated by LLMs to detect two prevalent injection attacks in web security: Cross-Site Scripting (XSS) and SQL injection (SQLi). Results show a significant improvement in detection performance when employing RAG and Self-Ranking, with increases of up to 71%pt (average 37%pt) and up to 43%pt (average 6%pt) in F2-Score for XSS and SQLi detection, respectively.
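The Self-Ranking idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: several LLM-generated candidate detectors vote on a pool of inputs, and each candidate is ranked by how often it agrees with the majority vote, in the spirit of Self-Consistency. All detector functions and sample inputs below are hypothetical stand-ins.

```python
# Sketch of Self-Ranking via majority agreement (illustrative only).
from typing import Callable, List

def self_rank(candidates: List[Callable[[str], bool]],
              inputs: List[str]) -> List[int]:
    """Return candidate indices sorted by agreement with the majority vote."""
    # One row of votes per input, one column per candidate detector.
    votes = [[c(x) for c in candidates] for x in inputs]
    # Majority verdict for each input across all candidates.
    majority = [sum(row) > len(candidates) / 2 for row in votes]
    # Score each candidate by how often it matches the majority.
    agreement = [
        sum(row[i] == m for row, m in zip(votes, majority))
        for i in range(len(candidates))
    ]
    return sorted(range(len(candidates)), key=lambda i: -agreement[i])

# Toy stand-ins for LLM-generated XSS detectors (hypothetical):
detectors = [
    lambda s: "<script" in s.lower(),
    lambda s: "script" in s.lower() or "onerror" in s.lower(),
    lambda s: False,  # a weak candidate that never flags anything
]
samples = ["<script>alert(1)</script>", "hello world", "<img onerror=x>"]
ranking = self_rank(detectors, samples)  # best-agreeing candidate first
```

In the paper's pipeline the ranking signal selects the most robust detector among multiple generations; this sketch substitutes simple majority agreement for that signal.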
Wed 15 Apr (displayed time zone: Brasilia, Distrito Federal, Brazil)
14:00 - 15:30 | Dependability and Security 2 (Research Track / Journal-first Papers / New Ideas and Emerging Results (NIER)) at Oceania X. Chair(s): Saeid Tizpaz-Niari (University of Illinois Chicago)
14:00 15m Talk | TraceCaps: Inline Provenance and Risk Enforcement for Agentic Software Engineering. New Ideas and Emerging Results (NIER). Andre Catarino (Faculty of Engineering, University of Porto), Claudia Mamede (Carnegie Mellon University), Rui Melo (Carnegie Mellon University and Faculty of Engineering, University of Porto), Rui Maranhao Abreu (University of Lisbon)
14:15 15m Talk | Can LLMs Hack Enterprise Networks? Autonomous Assumed Breach Penetration-Testing Active Directory Networks. Journal-first Papers.
14:30 15m Talk | PenForge: On-the-Fly Expert Agent Construction for Automated Penetration Testing. New Ideas and Emerging Results (NIER). Huihui Huang (Singapore Management University, Singapore), Jieke Shi (Singapore Management University), Junkai Chen (Singapore Management University, Singapore), Ting Zhang (Monash University), Yikun Li (Singapore Management University), Chengran Yang (Singapore Management University, Singapore), Eng Lieh Ouh (Singapore Management University, Singapore), Lwin Khin Shar (Singapore Management University), David Lo (Singapore Management University)
14:45 15m Talk | Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs. Journal-first Papers. Samuele Pasini (Università della Svizzera italiana), Jinhan Kim (Università della Svizzera italiana), Tommaso Aiello (SAP Security Research), Rocio Cabrera Lozoya (SAP Security Research), Antonino Sabetta (SAP), Paolo Tonella (USI Lugano)
15:00 15m Talk | LLM4JMH: Studying the Use of LLMs for Generating Java Performance Microbenchmarks. Research Track. Zongxiong Chen (Fraunhofer FOKUS), Derui Zhu (Technical University of Munich), Kundi Yao (Ontario Tech University), Weiyi Shang (University of Waterloo), Jinfu Chen (Wuhan University), Jiahui Geng (Mohamed bin Zayed University of Artificial Intelligence), Alexander Pretschner (TU Munich), Jens Grossklags (Technical University of Munich), Manfred Hauswirth (Fraunhofer FOKUS), Sonja Schimmler (Fraunhofer FOKUS & TU Berlin)
15:15 15m Talk | RulePilot: An LLM-Powered Agent for Security Rule Generation. Research Track. Hongtai Wang (National University of Singapore), Ming Xu (Shanghai Jiao Tong University / National University of Singapore), Yanpei Guo (National University of Singapore), Weili Han (Fudan University), Hoon Wei Lim (Cyber Special Ops-R&D, NCS Group), Jin Song Dong (National University of Singapore)