Large Language Models for In-File Vulnerability Localization can be “Lost in the End”
Traditionally, software vulnerability detection research has focused on individual small functions, because earlier language processing technologies could not handle larger inputs. However, this function-level approach may miss bugs that span multiple functions and code blocks. Recent advances in artificial intelligence have enabled the processing of much larger inputs, and everyday software developers increasingly rely on chat-based large language models (LLMs) like GPT-3.5 and GPT-4 to detect vulnerabilities across entire files, not just within functions. This emerging practice makes it urgent for researchers to determine whether commonly used LLMs can effectively analyze file-sized inputs, so as to provide timely insights to software developers and engineers about the pros and cons of this technological trend. Hence, the goal of this paper is to evaluate the effectiveness of several state-of-the-art chat-based LLMs, including the GPT models, at detecting in-file vulnerabilities. We conducted a costly investigation into how the performance of LLMs varies with vulnerability type, input size, and vulnerability location within the file. To give our study enough statistical power (1 − β ≥ .8), we focused only on the three most common (and most dangerous) vulnerability types: XSS, SQL injection, and path traversal. Our findings indicate that the effectiveness of LLMs in detecting these vulnerabilities is strongly influenced by both the location of the vulnerability and the overall size of the input. Specifically, regardless of the vulnerability type, LLMs significantly (p < .05) underperform when the vulnerability is located toward the end of larger files, a pattern we call the ‘lost-in-the-end’ effect. Finally, to further support software developers and practitioners, we explored the optimal input size for these LLMs and present a simple strategy for identifying it, one that can also be applied to other models and vulnerability types. We show that adjusting the input size accordingly leads to significant improvements in LLM-based vulnerability detection, with an average recall increase of 32% across all models.
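The mitigation the abstract describes is conceptually simple: instead of feeding the whole file to the model, split it into windows no larger than an empirically determined optimal input size, query the model on each window, and map the findings back to absolute file positions. The sketch below illustrates this idea under stated assumptions; it is not the paper's artifact. `query_llm` is a hypothetical wrapper around whatever chat model is used (it is assumed to return vulnerable line numbers relative to the prompt it received), windows are measured in source lines, and the optimal window size is found by a simple recall sweep over candidate sizes on a labeled validation set, in the spirit of the strategy the paper proposes.

```python
# Minimal sketch of size-aware, in-file vulnerability localization with an LLM.
# Assumptions (not from the paper's artifact): `query_llm` wraps a chat model
# and returns vulnerable line numbers relative to the prompt it was given;
# windows are measured in source lines; the best window size is picked by
# sweeping candidate sizes on a labeled validation set.

from typing import Callable, Iterable


def split_into_windows(lines: list[str], window_size: int,
                       overlap: int = 10) -> Iterable[tuple[int, list[str]]]:
    """Yield (start_line, window) pairs covering the file, with a small
    overlap so findings near window boundaries are not missed."""
    step = max(window_size - overlap, 1)
    for start in range(0, len(lines), step):
        yield start, lines[start:start + window_size]
        if start + window_size >= len(lines):
            break


def detect_in_file(source: str,
                   query_llm: Callable[[str], list[int]],
                   window_size: int) -> set[int]:
    """Query the model window by window and map the reported
    (window-relative) line numbers back to absolute file lines."""
    lines = source.splitlines()
    findings: set[int] = set()
    for start, window in split_into_windows(lines, window_size):
        prompt = "\n".join(window)
        for rel_line in query_llm(prompt):
            findings.add(start + rel_line)
    return findings


def pick_window_size(validation: list[tuple[str, set[int]]],
                     query_llm: Callable[[str], list[int]],
                     candidates: list[int]) -> int:
    """Sweep candidate input sizes on labeled files and keep the one
    with the highest recall."""
    def recall(size: int) -> float:
        hits = total = 0
        for source, truth in validation:
            found = detect_in_file(source, query_llm, size)
            hits += len(found & truth)
            total += len(truth)
        return hits / total if total else 0.0
    return max(candidates, key=recall)
```

In practice, `query_llm` would prompt, for example, GPT-4 with the window's source code and parse vulnerable line numbers from its answer; the overlap between consecutive windows guards against a window boundary splitting a vulnerable statement in two.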
Tue 24 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 12:30 | Vulnerability 2 | Research Papers / Demonstrations at Pirsenteret 150 | Chair(s): Xiaoxue Ren (Zhejiang University)

10:30 (20m) Talk | Statement-level Adversarial Attack on Vulnerability Detection Models via Out-Of-Distribution Features | Research Papers | Xiaohu Du (Huazhong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology), Haoyu Wang, Zichao Wei (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology) | DOI

10:50 (20m) Talk | Large Language Models for In-File Vulnerability Localization can be “Lost in the End” | Research Papers | Francesco Sovrano (Collegium Helveticum, ETH Zurich, Switzerland; Department of Informatics, University of Zurich, Switzerland), Adam Bauer (University of Zurich), Alberto Bacchelli (University of Zurich) | DOI

11:10 (20m) Talk | One-for-All Does Not Work! Enhancing Vulnerability Detection by Mixture-of-Experts (MoE) | Research Papers | Xu Yang (University of Manitoba), Shaowei Wang (University of Manitoba), Jiayuan Zhou (Huawei), Wenhan Zhu (Huawei Canada) | DOI

11:30 (20m) Talk | Gleipner: A Benchmark for Gadget Chain Detection in Java Deserialization Vulnerabilities | Research Papers | DOI

11:50 (10m) Talk | BinPool: A Dataset of Vulnerabilities for Binary Security Analysis | Demonstrations | Sima Arasteh (University of Southern California), Georgios Nikitopoulos (Dartmouth College, University of Thessaly), Wei-Cheng Wu (Dartmouth College), Nicolaas Weideman (USC Information Sciences Institute), Aaron Portnoy (Dartmouth College), Mukund Raghothaman (University of Southern California), Christophe Hauser (Dartmouth College)

12:00 (20m) Talk | Today's cat is tomorrow's dog: accounting for time-based changes in the labels of ML vulnerability detection approaches | Research Papers | Ranindya Paramitha (University of Trento), Yuan Feng, Fabio Massacci (University of Trento; Vrije Universiteit Amsterdam) | DOI | Pre-print

12:20 (10m) Talk | KAVe: A Tool to Detect XSS and SQLi Vulnerabilities using a Multi-Agent System over a Multi-Layer Knowledge Graph | Demonstrations | Rafael Ramires (LASIGE, DI, Faculdade de Ciencias da Universidade de Lisboa), Ana Respício (LASIGE, DI, Faculdade de Ciencias da Universidade de Lisboa), Ibéria Medeiros (LaSIGE, Faculdade de Ciências da Universidade de Lisboa), Mike Papadakis (University of Luxembourg)
This room is located outside the Clarion Hotel
This room is located in the Pirsenteret (The Pier Center) convention center, just behind the hotel, toward the fjord.
You should be able to go through the emergency exit at the Clarion, next to the Cosmos 3 wing, which will bring you close to Pirsenteret.
The entrance to the center is from here:
https://maps.app.goo.gl/dU3qH6kAimXGBNHe7
Once inside, go straight ahead and you will find signage to the room, which is known as room 150 inside the center.