SAS 2021
Sun 17 - Fri 22 October 2021 Chicago, Illinois, United States
co-located with SPLASH 2021
Sun 17 Oct 2021 09:15 - 09:30 at Zurich B - Session 1A Chair(s): Cezara Drăgoi
Sun 17 Oct 2021 17:15 - 17:30 at Zurich B - Session 1A Chair(s): Kedar Namjoshi

Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed neural network verification algorithms. These verifiers aim to determine whether a given property holds for all inputs in a specified region of a network's input space. However, many verifiers model floating point arithmetic inaccurately and do not fully discuss the consequences.

We show that neglecting floating point error leads to unsound verification that can be systematically exploited in practice. For a pretrained neural network, we present a method that efficiently searches for inputs that witness the incorrectness of robustness claims made by a complete verifier. We also present a method to construct neural network architectures and weights that induce wrong results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of all floating point computations in both the network inference and the verification system itself.
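The kind of discrepancy the abstract alludes to can be demonstrated in a few lines. The following is an illustrative sketch, not the paper's attack: it shows that floating point addition is not associative, so a verifier that reasons over exact real arithmetic can disagree with the value the network's floating point inference actually computes.

```python
# Illustrative sketch (not the paper's method): IEEE 754 double addition
# is not associative, so the computed value depends on evaluation order.
a, b, c = 1e16, -1e16, 1.0

order1 = (a + b) + c  # cancellation happens first; the 1.0 survives
order2 = a + (b + c)  # 1.0 is absorbed into -1e16 before cancellation

assert order1 == 1.0
assert order2 == 0.0
# If a classifier's score were this sum, a sign or threshold decision could
# differ between a verifier's real-arithmetic model and actual inference.
```

In real arithmetic both orders equal 1.0; a verifier modeling the sum over the reals would therefore certify a claim that one concrete floating point evaluation order violates, which is the gap the paper exploits at scale.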

Sun 17 Oct

Displayed time zone: Central Time (US & Canada)

09:00 - 10:20
Session 1A (SAS) at Zurich B +8h
Chair(s): Cezara Drăgoi Inria / ENS / Informal Systems
09:00
15m
Talk
Accelerating Program Analyses in Datalog by Merging Library Facts (Virtual)
SAS
Yifan Chen Peking University, Chenyang Yang, Xin Zhang Peking University, Yingfei Xiong Peking University, Hao Tang Peking University, Xiaoyin Wang University of Texas at San Antonio, Lu Zhang Peking University
09:15
15m
Talk
Exploiting Verified Neural Networks via Floating Point Numerical Error (Virtual)
SAS
Kai Jia Massachusetts Institute of Technology, Martin C. Rinard
Pre-print
09:30
15m
Talk
Verifying Low-dimensional Input Neural Networks via Input Quantization (Virtual)
SAS
Kai Jia Massachusetts Institute of Technology, Martin C. Rinard Massachusetts Institute of Technology
Pre-print
09:45
15m
Talk
A Multi-Language Static Analysis of Python Programs with Native C Extensions (Virtual)
SAS
Raphaël Monat Sorbonne Université — LIP6, Abdelraouf Ouadjaout Sorbonne Université, Antoine Miné Sorbonne Université
Pre-print Media Attached
10:00
20m
Live Q&A
Session 1A Discussion, Questions and Answers (Virtual)
SAS

17:00 - 18:20
Session 1A (SAS) at Zurich B
Chair(s): Kedar Namjoshi Nokia Bell Labs
17:00
15m
Talk
Accelerating Program Analyses in Datalog by Merging Library Facts (Virtual)
SAS
Yifan Chen Peking University, Chenyang Yang, Xin Zhang Peking University, Yingfei Xiong Peking University, Hao Tang Peking University, Xiaoyin Wang University of Texas at San Antonio, Lu Zhang Peking University
17:15
15m
Talk
Exploiting Verified Neural Networks via Floating Point Numerical Error (Virtual)
SAS
Kai Jia Massachusetts Institute of Technology, Martin C. Rinard
Pre-print
17:30
15m
Talk
Verifying Low-dimensional Input Neural Networks via Input Quantization (Virtual)
SAS
Kai Jia Massachusetts Institute of Technology, Martin C. Rinard Massachusetts Institute of Technology
Pre-print
17:45
15m
Talk
A Multi-Language Static Analysis of Python Programs with Native C Extensions (Virtual)
SAS
Raphaël Monat Sorbonne Université — LIP6, Abdelraouf Ouadjaout Sorbonne Université, Antoine Miné Sorbonne Université
Pre-print Media Attached
18:00
20m
Live Q&A
Session 1A Discussion, Questions and Answers (Virtual)
SAS