Building Trust and Safety in Artificial Intelligence with Abstract Interpretation (Keynote, Remote)
Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond test datasets. Unfortunately, the standard practice of evaluating a DNN on a finite set of test inputs cannot guarantee its safety on inputs encountered in the wild. In this talk, I will focus on how abstract interpretation can be leveraged to bridge this gap by building DNNs with strong generalization over an infinite set of unseen inputs. Along the way, I will discuss some of our recent work on building trust and safety in diverse domains such as vision, systems, and finance. I will also describe a path toward making static analysis for DNNs more scalable, easier to develop, and accessible to DNN developers without a formal-methods background.
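As a minimal illustration of the core idea (a sketch, not code from the talk): abstract interpretation with the interval domain can propagate input bounds through a network and soundly over-approximate its outputs for every input in an infinite set, such as a box of perturbed images. The toy two-input ReLU network below is a hypothetical example.

```python
# Interval abstract interpretation for a toy 1-layer ReLU network (illustrative
# sketch). Input set: the box l <= x <= u, which contains infinitely many inputs.

def linear_interval(W, b, l, u):
    """Propagate interval bounds through y = W x + b.

    The lower bound of each output uses l[j] where the weight is positive
    and u[j] where it is negative; the upper bound does the opposite.
    """
    lo, hi = [], []
    for row, bias in zip(W, b):
        lo.append(bias + sum(w * (l[j] if w >= 0 else u[j]) for j, w in enumerate(row)))
        hi.append(bias + sum(w * (u[j] if w >= 0 else l[j]) for j, w in enumerate(row)))
    return lo, hi

def relu_interval(l, u):
    """ReLU is monotone, so it maps intervals exactly: [max(0,l), max(0,u)]."""
    return [max(0.0, x) for x in l], [max(0.0, x) for x in u]

# Toy network: 2 inputs -> 2 ReLU units (weights are made up for illustration).
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.2]
l, u = [0.0, 0.0], [1.0, 1.0]  # the unit box: an infinite input set
lo, hi = relu_interval(*linear_interval(W, b, l, u))
print(lo, hi)  # certified output bounds holding for EVERY input in the box
```

If a safety property (e.g. "output 0 never exceeds a threshold") holds on the computed interval, it is proven for all inputs in the box at once, which is exactly what finite testing cannot provide. The price is over-approximation: intervals may be too coarse, motivating richer domains and the scalability work mentioned above.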
I am a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). My research lies at the intersection of Machine Learning (ML), Formal Methods (FM), and Systems. My long-term goal is to construct intelligent computing systems with formal guarantees about their behavior and safety.
Mon 23 Oct (time zone: Lisbon)

16:00 - 17:30 | Session 8 (SAS 2023), Room I. Chair(s): José Morales (IMDEA Software Institute), Manuel Hermenegildo (Technical University of Madrid (UPM) and IMDEA Software Institute)

16:00 (60m) | Keynote: Building Trust and Safety in Artificial Intelligence with Abstract Interpretation (Remote). SAS 2023. Pre-print available.

17:00 (30m) | Awards: Radhia Cousot Award and PC report. SAS 2023. Chair(s): Manuel Hermenegildo (Technical University of Madrid (UPM) and IMDEA Software Institute), José Morales (IMDEA Software Institute)