Security Debt in LLM Agent Applications: A Measurement Study of Vulnerabilities and Mitigation Trade-offs
The strengths of large language models (LLMs) in content comprehension and question answering have led to the rapid emergence of LLM agents. Developers across diverse domains are actively building their own agent applications (apps), as these apps can streamline workflows, boost efficiency, or deliver innovative solutions, thereby enhancing the competitiveness of their products. Agent apps are playing an increasingly important role in our daily lives. However, numerous serious vulnerabilities and security issues have been identified in these apps. To effectively manage future security risks, it is essential to systematically understand the unique characteristics of agent app vulnerabilities and how they can be mitigated. In this paper, we present the first comprehensive study of agent app vulnerabilities, the mitigation practices of app developers, and the associated challenges and trade-offs. Based on an analysis of 221 real-world vulnerabilities, we identify 14 vulnerability types and 15 root causes across 7 components. Our study further investigates developer reactions, evaluates the effectiveness of various mitigation strategies, and explores the practical challenges and inevitable trade-offs in vulnerability mitigation. Finally, we distill 12 key findings, discuss their implications for agent app developers, maintainers, and security researchers, and offer suggestions for future research directions.