Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study
Preliminary security risk analysis (PSRA) provides a quick approach to identify, evaluate, and propose remediation for potential risks in specific scenarios. The extensive expertise required for an effective PSRA and the substantial amount of text-related tasks hinder quick assessments in mission-critical contexts, where timely and prompt actions are essential. The speed and accuracy of human experts in PSRA significantly impact response time. A large language model can summarise information far faster than a human. To our knowledge, no prior study has explored the capabilities of fine-tuned models (FTMs) in PSRA. Our case study investigates the proficiency of an FTM in assisting practitioners with PSRA. We manually curated 141 representative samples from over 50 mission-critical analyses archived by the industrial context team in the last five years. We compared the proficiency of the FTM against seven human experts. Within the industrial context, our approach has proven successful in reducing errors in PSRA, hastening security risk detection, and minimising false positives and negatives. This translates to cost savings for the company by averting unnecessary expenses associated with implementing unwarranted countermeasures. Therefore, experts can focus on more comprehensive risk analysis, leveraging LLMs for an effective preliminary assessment within a condensed timeframe.
Thu 20 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
16:00 - 17:15 | Security (2) | Research Papers / Industry at Room Vietri | Chair(s): Muhammad Ali Babar (School of Computer Science, The University of Adelaide)
16:00 | 15m Talk | VulDL: Tree-based and Graph-based Neural Networks for Vulnerability Detection and Localization | Research Papers | Jingzheng Wu (Institute of Software, The Chinese Academy of Sciences), Xiang Ling (Institute of Software, Chinese Academy of Sciences), Xu Duan (Institute of Software, Chinese Academy of Sciences), Tianyue Luo (Institute of Software, Chinese Academy of Sciences), Mutian Yang (Institute of Software, Chinese Academy of Sciences)
16:15 | 15m Talk | How the Training Procedure Impacts the Performance of Deep Learning-based Vulnerability Patching | Research Papers | Antonio Mastropaolo (William and Mary, USA), Vittoria Nardone (University of Molise), Gabriele Bavota (Software Institute @ Università della Svizzera Italiana), Massimiliano Di Penta (University of Sannio, Italy)
16:30 | 15m Talk | Reality Check: Assessing GPT-4 in Fixing Real-World Software Vulnerabilities | Research Papers | Zoltán Ságodi (University of Szeged), Gabor Antal (University of Szeged), Bence Bogenfürst (University of Szeged), Martin Isztin (University of Szeged), Peter Hegedus (University of Szeged), Rudolf Ferenc (University of Szeged)
16:45 | 15m Talk | Does trainer gender make a difference when delivering phishing training? A new experimental design to capture bias | Research Papers | André Palheiros Da Silva (Vrije Universiteit), Winnie Bahati Mbaka (Vrije Universiteit), Johann Mayer (University of Twente), Jan-Willem Bullee (University of Twente), Katja Tuma (Vrije Universiteit Amsterdam)
17:00 | 15m Talk | Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study | Industry | Matteo Esposito (University of Rome Tor Vergata), Francesco Palagiano (Multitel di Lerede Alessandro & C. s.a.s.) | DOI · Pre-print