SAS 2023
Sun 22 - Tue 24 October 2023 Cascais, Portugal
co-located with SPLASH 2023
Mon 23 Oct 2023 16:00 - 17:00 at Room I - Session 8 Chair(s): José Morales, Manuel Hermenegildo

Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond testing datasets. Unfortunately, the standard practice of measuring DNN performance on a finite set of test inputs cannot ensure DNN safety on inputs in the wild. In this talk, I will focus on how abstract interpretation can be leveraged to bridge this gap by building DNNs with strong generalization on an infinite set of unseen inputs. In the process, I will discuss some of our recent work for building trust and safety in diverse domains such as vision, systems, finance, and more. I will also describe a path toward making static analysis for DNNs more scalable, easy to develop, and accessible to DNN developers lacking formal backgrounds.
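The core idea of certifying a DNN on an infinite set of inputs can be illustrated with the simplest abstract domain, intervals (boxes): instead of running the network on individual test points, bounds are propagated through each layer so that the result holds for every input in the region. The following minimal sketch is an illustrative assumption, not code from the talk; the toy weights and the input box are made up for the example.

```python
import numpy as np

def interval_linear(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = W x + b.

    Splitting W into its positive and negative parts gives sound,
    tight bounds for a single affine layer.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy one-layer network (hypothetical weights). The resulting bounds
# hold for ALL inputs in the box [0,1] x [0,1], not just sampled
# test points -- this is what distinguishes static analysis from
# finite-dataset testing.
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])
b = np.array([0.0, -0.25])
lo = np.array([0.0, 0.0])
hi = np.array([1.0, 1.0])

zl, zh = interval_linear(W, b, lo, hi)
yl, yh = interval_relu(zl, zh)
print(yl, yh)  # output bounds after the ReLU layer
```

Richer abstract domains (e.g. zonotopes or polyhedra) trade precision against cost in the same framework; the interval domain above is just the cheapest instance.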

I am a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). My research lies at the intersection of Machine Learning (ML), Formal Methods (FM), and Systems. My long-term goal is to construct intelligent computing systems with formal guarantees about their behavior and safety.

Mon 23 Oct

Displayed time zone: Lisbon

16:00 - 17:30
Session 8: SAS 2023 at Room I
Chair(s): José Morales IMDEA Software Institute, Manuel Hermenegildo Technical University of Madrid (UPM) and IMDEA Software Institute
Building Trust and Safety in Artificial Intelligence with Abstract Interpretation (Remote, Keynote)
Speaker: Gagandeep Singh University of Illinois at Urbana-Champaign; VMware Research
Radhia Cousot Award and PC report
Chair(s): Manuel Hermenegildo Technical University of Madrid (UPM) and IMDEA Software Institute, José Morales IMDEA Software Institute