ICSE 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
Tue 29 Apr 2025 14:50 - 15:05 at 207 - Session 3 Chair(s): Maximilian Poretschkin

AI agents have been boosted by large language models. They can function as intelligent assistants and complete tasks on behalf of their users, given access to tools and the ability to execute commands in their environments. Through studying and experimenting with the workflows of typical AI agents, we have raised several security concerns. These potential vulnerabilities are addressed neither by the frameworks used to build the agents nor by research aimed at improving them. In this paper, we identify and describe these vulnerabilities in detail from a system security perspective, emphasizing their causes and severe effects. Furthermore, we introduce defense mechanisms corresponding to each vulnerability, with designs and experiments to evaluate their viability. Altogether, this paper contextualizes the security issues in the current development of AI agents and delineates methods to make AI agents safer and more reliable.
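As a minimal, illustrative sketch of the kind of defense the abstract alludes to for agents that execute commands in their environments (this is not the paper's actual mechanism; the policy, function names, and allowlist below are hypothetical), one common mitigation is to gate agent-issued shell commands behind an allowlist and run them in a time-limited subprocess:

    # Illustrative sketch of a generic defense for tool-using agents:
    # restrict which binaries the agent may invoke and bound their runtime.
    # Not the mechanism from the paper; names and policy are hypothetical.
    import shlex
    import subprocess

    ALLOWED_BINARIES = {"ls", "cat", "grep", "python3"}  # assumed policy, for illustration

    def run_agent_command(command: str, timeout_s: float = 5.0) -> str:
        """Execute an agent-proposed command only if its binary is allowlisted."""
        argv = shlex.split(command)
        if not argv or argv[0] not in ALLOWED_BINARIES:
            raise PermissionError(f"Command blocked by policy: {command!r}")
        # Run without a shell so the agent cannot chain commands or redirect output.
        result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout_s)
        return result.stdout

    if __name__ == "__main__":
        print(run_agent_command("ls -la"))   # permitted by the allowlist
        try:
            run_agent_command("rm -rf /")    # rejected by the allowlist
        except PermissionError as err:
            print(err)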

Tue 29 Apr

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30  Session 3 (RAIE) at 207
Chair(s): Maximilian Poretschkin (Fraunhofer IAIS & University of Bonn)
14:00 (50m)  Keynote: Keynote 2 by David Lo (RAIE)
Keynote speaker: David Lo (Singapore Management University)
14:50 (15m)  Talk: Security of AI Agents (RAIE)
Presenter: Yifeng He (University of California, Davis), Ethan Wang (University of California, Davis), Yuyang Rong (University of California, Davis), Zifei Cheng (University of California, Davis), Hao Chen (University of California, Davis)
15:05 (15m)  Talk: Raising AI Ethics Awareness: Insights from a Quiz-Based Workshop with Software Practitioners – An Experience Report (RAIE)
Aastha Pant (Monash University), Presenter: Rashina Hoda (Monash University), Paul McIntosh (RMIT University)