Solution-aware vs global ReLU selection: partial MILP strikes back for DNN verification
This program is tentative and subject to change.
Branch and Bound (BaB) is considered the most efficient technique for DNN verification: it propagates bounds over numerous branches to accurately under- and over-approximate the values a given neuron can take, even in large DNNs, enabling formal verification of properties such as local robustness. Nevertheless, the number of branches grows exponentially with the number of important variables, and there are complex instances for which the number of branches is too large to handle even with BaB. In these cases, giving BaB more time is not efficient, as the number of branches processed grows only linearly with the time-out. Such cases arise with verification-agnostic DNNs, non-local properties (e.g., global robustness, Lipschitz bound computation, compressed sensing), etc.
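As a brief illustration of this exponential growth (standard BaB background, not a construction specific to this paper): branching on an unstable ReLU $y = \max(0, x)$ with pre-activation bounds $l < 0 < u$ splits the problem into two linear sub-problems,
\[
\text{branch } x \ge 0:\; y = x, \qquad \text{branch } x \le 0:\; y = 0,
\]
so $k$ unstable ReLUs can generate up to $2^k$ branches.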
To handle complex instances, we revisit a divide-and-conquer approach to break down the complexity: instead of a few complex BaB calls, we rely on many small {\em partial} MILP calls. The crucial step is to select very few but very important ReLUs to treat using (costly) binary variables; previous attempts were suboptimal in that respect. In this paper, we propose a novel {\em solution-aware} ReLU scoring (SAS), and we adapt the BaB-SR and BaB-FSB branching functions into global ReLU scoring (GS) functions. We compare them theoretically as well as experimentally. Perhaps surprisingly, the most accurate solution (SAS) for selecting ReLUs to treat as binary variables differs from the most efficient solution (GS) for branching within this selection. Compared with previous attempts, SAS reduces the number of binary variables by around 6 times while maintaining the same level of accuracy. Implemented in Hybrid MILP, which first calls $\alpha,\beta$-Crown with a short time-out to solve the easier instances and then resorts to partial MILP, this yields a very accurate yet efficient verifier, reducing the number of undecided instances by up to $40\%$, down to low levels ($8$-$15\%$), while keeping a reasonable runtime ($46$s-$417$s on average per instance), even for fairly large CIFAR CNNs with 20 000 nodes.
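For context on why binary variables are costly (this is the textbook MILP encoding, not necessarily the paper's exact formulation): an unstable ReLU $y = \max(0, x)$ with bounds $l \le x \le u$, $l < 0 < u$, is encoded exactly with one binary variable $\delta \in \{0,1\}$ via
\[
y \ge 0, \quad y \ge x, \quad y \le u\,\delta, \quad y \le x - l\,(1-\delta),
\]
whereas the cheap LP (triangle) relaxation drops $\delta$ and keeps only $y \ge 0$, $y \ge x$, $y \le \frac{u\,(x - l)}{u - l}$. Each additional binary variable can double the MILP search space, which is why selecting very few ReLUs to encode exactly is the crucial step.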
Thu 30 Oct (displayed time zone: Chennai, Kolkata, Mumbai, New Delhi)
11:00 - 12:30: ATVA Papers

- 11:00, 30m (Paper): Inductive Generalization in Reinforcement Learning from Specifications. Vignesh Subramanian, Rohit Kushwah, Subhajit Roy (IIT Kanpur), Suguman Bansal (Georgia Institute of Technology, USA)
- 11:30, 30m (Paper): Locally Pareto-Optimal Interpretations for Black-Box Machine Learning Models. Aniruddha Joshi (UC Berkeley), Supratik Chakraborty (IIT Bombay), S. Akshay, Shetal Shah (IIT Bombay, India), Hazem Torfah (Chalmers University of Technology), Sanjit Seshia (UC Berkeley)
- 12:00, 30m (Paper): Solution-aware vs global ReLU selection: partial MILP strikes back for DNN verification. Yuke Liao (CNRS@CREATE, Singapore), Blaise Genest (IPAL - CNRS - CNRS@CREATE), Kuldeep S. Meel (National University of Singapore), Shaan Aryaman (NYU)