Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
We present a comparative analysis of open-source tools that scan conversational large language models (LLMs) for vulnerabilities, referred to here as \emph{scanners}. As LLMs become integral to a wide range of applications, they also expose new attack surfaces and face security risks such as information leakage and jailbreak attacks. AI red-teaming, adapted from traditional cybersecurity practice, is recognized by governments and companies as essential, in large part because the threat landscape evolves continuously. Our study evaluates four prominent, cutting-edge scanners (Garak, Giskard, PyRIT, and CyberSecEval) that address this challenge by automating the red-teaming process. We detail the distinctive features and practical use of each scanner, outline the unifying principles of their design, and perform quantitative evaluations to compare them. These evaluations uncover significant reliability issues in detecting successful attacks, highlighting a fundamental gap for future development. In addition, we contribute a foundational labeled dataset as an initial step toward bridging this gap. Based on these findings, we offer suggestions for future regulation and standardization, as well as strategic recommendations to help organizations select a scanner, considering customizability, test-suite comprehensiveness, and industry-specific use cases.