A Taxonomy of System-Level Attacks on Deep Learning Models in Autonomous Vehicles
The advent of deep learning and its impressive performance have enabled its use in complex systems, including autonomous vehicles. However, deep learning models are susceptible to mis-predictions when small, adversarial changes are introduced into their input. Such mis-predictions can be triggered in the real world and can result in failures of the entire system. In recent years, a growing number of research works have investigated ways to mount attacks against autonomous vehicles that exploit their deep learning components. Such attacks are directed at elements of the environment in which these systems operate, and their effectiveness is assessed in terms of the system-level failures they trigger. However, there has been no systematic attempt to analyze and categorize such attacks. In this paper, we present the first taxonomy of system-level attacks against autonomous vehicles. We constructed our taxonomy by selecting 21 highly relevant papers and tagging them with 12 top-level taxonomy categories and several sub-categories. The taxonomy allowed us to investigate the attack features, the most attacked components and systems, the underlying threat models, and the failure chains from input perturbation to system-level failure. We distilled several lessons for practitioners and identified possible directions for future work for researchers.
Fri 17 Apr (times shown in the Brasília, Distrito Federal, Brazil time zone)
14:00 - 15:30 | Session: Dependability and Security 10 | Tracks: Journal-first Papers / New Ideas and Emerging Results (NIER) / Research Track | Room: Oceania X | Chair(s): Triet Le (Adelaide University)
14:00 (15m, Talk) | When Uncertainty Leads to Unsafety: Empirical Insights into the Role of Uncertainty in Unmanned Aerial Vehicle Safety | Journal-first Papers | Sajad Khatiri (Università della Svizzera italiana and University of Bern), Fatemeh Mohammadi Amin (Zurich University of Applied Sciences, ZHAW), Sebastiano Panichella (University of Bern), Paolo Tonella (USI Lugano)
14:15 (15m, Talk) | Structural Causal World Models: Towards An Assurance Framework for Safety-Critical Systems and Safeguarded AI | New Ideas and Emerging Results (NIER) | Jie Zou (Centre for Assuring Autonomy, University of York, UK), Simon Burton (Centre for Assuring Autonomy, University of York, UK), Radu Calinescu (University of York, UK), Ioannis Stefanakos (University of York), Roger Rivett (University of York)
14:30 (15m, Talk) | Towards Verifiably Safe Tool Use for LLM Agents | New Ideas and Emerging Results (NIER) | Aarya Doshi (Georgia Institute of Technology), Yining Hong (Carnegie Mellon University), Congying Xu (The Hong Kong University of Science and Technology, China), Eunsuk Kang (Carnegie Mellon University), Alexandros Kapravelos (NCSU), Christian Kästner (Carnegie Mellon University)
14:45 (15m, Talk) | A Taxonomy of System-Level Attacks on Deep Learning Models in Autonomous Vehicles | Journal-first Papers | Masoud Jamshidiyan Tehrani (Università della Svizzera italiana), Jinhan Kim (Università della Svizzera italiana), Rosmael Zidane Lekeufack Foulefack (University of Trento), Alessandro Marchetto (Università di Trento), Paolo Tonella (USI Lugano)
15:00 (15m, Talk) | Model Discovery and Graph Simulation: A Lightweight Gateway to Chaos Engineering | New Ideas and Emerging Results (NIER) | Anatoly Krasnovsky (Department of Computer Science and Engineering, Innopolis University; MB3R Lab, 420500, Innopolis, Russia) | DOI and pre-print available; media and file attached
15:15 (15m, Talk) | Learning From Software Failures: A Case Study at a National Space Research Center | Research Track | Dharun Anandayuvaraj (Purdue University), Tanmay Singla (Purdue University), Zain Alabedin Haj Hammadeh (German Aerospace Center, DLR), Andreas Lund (German Aerospace Center, DLR), Alexandra Holloway (Jet Propulsion Laboratory, JPL), James C. Davis (Purdue University) | DOI and pre-print available