Building Trust and Safety in Artificial Intelligence with Abstract Interpretation
Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond testing datasets. Unfortunately, the standard practice of measuring DNN performance on a finite set of test inputs cannot ensure DNN safety on inputs in the wild. In this talk, I will focus on how abstract interpretation can be leveraged to bridge this gap by building DNNs with strong generalization on an infinite set of unseen inputs. In the process, I will discuss some of our recent work for building trust and safety in diverse domains such as vision, systems, finance, and more. I will also describe a path toward making static analysis for DNNs more scalable, easy to develop, and accessible to DNN developers lacking formal backgrounds.
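To make the core idea concrete: abstract interpretation can propagate an entire *set* of inputs (here a simple interval box) through a network and certify an output range that holds for every input in that set, which is exactly the guarantee a finite test set cannot provide. The sketch below is a minimal, generic illustration of interval analysis through one affine + ReLU layer; the toy weights and function names are invented for this example and do not reflect the speaker's actual analyzers.

```python
# Minimal sketch of interval abstract interpretation for a neural layer.
# All weights, biases, and the input box are hypothetical toy values.

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W x + b, one output row at a time.
    A positive weight takes the lower bound from lo, a negative one from hi."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j])
                                 for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j])
                                 for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Toy 2-in / 2-out layer analyzed over the infinite input box [0,1] x [0,1].
W = [[1.0, -1.0], [0.5, 2.0]]
b = [0.0, -1.0]
lo, hi = relu_bounds(*affine_bounds([0.0, 0.0], [1.0, 1.0], W, b))
print(lo, hi)  # → [0.0, 0.0] [1.0, 1.5]
```

The printed box is sound for all (uncountably many) inputs in the unit square; tighter abstract domains than intervals trade more computation for less over-approximation.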
I am a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). My research lies at the intersection of Machine Learning (ML), Formal Methods (FM), and Systems. My long-term goal is to construct intelligent computing systems with formal guarantees about their behavior and safety.
Mon 23 Oct, 16:00 - 17:30 (time zone: Lisbon)

Keynote (remote): Building Trust and Safety in Artificial Intelligence with Abstract Interpretation
Gagandeep Singh (University of Illinois at Urbana-Champaign; VMware Research)

Radhia Cousot Award and PC report