Existing neural network verifiers compute a proof that each input is handled
correctly under a given perturbation by propagating a convex set of reachable
values at each layer. This process is repeated independently for each input
(e.g., image) and perturbation (e.g., rotation), leading to an expensive
overall proof effort when handling an entire dataset. In this work, we introduce
a new method for reducing this verification cost, based on the key insight that
the convex sets obtained at intermediate layers can overlap across different
inputs and perturbations. Leveraging this insight, we propose the general
concept of shared certificates, which enables proof effort to be reused across
multiple inputs and drives down overall verification cost. We validate our
insight via an extensive experimental evaluation and demonstrate the
effectiveness of shared certificates on a range of datasets and attack
specifications, including geometric, patch, and $\ell_\infty$ input perturbations.
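To make the reuse idea concrete, here is a minimal sketch, not the paper's actual algorithm: it propagates simple interval (box) bounds rather than the more precise convex relaxations verifiers typically use, and all names (`certify`, `templates`, `template_layer`) are illustrative. A box that was fully verified for one input is cached as a "template"; a later query whose intermediate box is contained in a template at the same layer (and for the same target class) inherits that proof without further propagation.

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    # Sound box propagation through an affine layer y = W x + b.
    center, radius = (lb + ub) / 2.0, (ub - lb) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lb, ub):
    # ReLU is monotone, so the bounds map through elementwise.
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

def contained(lb, ub, t_lb, t_ub):
    # If a new intermediate set lies inside an already-verified template,
    # the template's downstream proof covers the new set as well.
    return bool(np.all(lb >= t_lb) and np.all(ub <= t_ub))

def certify(x, eps, layers, target, templates, template_layer=0):
    """Try to prove `target` stays the argmax for every input in the
    l_inf ball of radius eps around x, reusing cached templates."""
    lb, ub = x - eps, x + eps
    boxes = []
    for i, (W, b) in enumerate(layers):
        lb, ub = interval_affine(lb, ub, W, b)
        if i < len(layers) - 1:              # no ReLU on the output logits
            lb, ub = interval_relu(lb, ub)
        boxes.append((lb, ub))
        # Templates are keyed per layer and per target class for soundness.
        for t_lb, t_ub in templates.get((i, target), ()):
            if contained(lb, ub, t_lb, t_ub):
                return True                  # shared certificate: proof reused
    others = np.arange(lb.shape[0]) != target
    verified = bool(np.all(lb[target] > ub[others]))
    if verified:
        # Cache an intermediate box so later inputs/perturbations whose
        # sets fall inside it inherit this proof without re-propagation.
        templates.setdefault((template_layer, target), []).append(
            boxes[template_layer])
    return verified
```

A usage sketch under the same assumptions: after one full verification populates the cache, a nearby query with a smaller perturbation can be discharged by the containment check alone.

```python
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
templates = {}
x = rng.standard_normal(4)
certify(x, 0.01, layers, target=0, templates=templates)          # full proof
certify(x + 0.001, 0.005, layers, target=0, templates=templates) # may reuse it
```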

Authors: <a href="http://arxiv.org/find/cs/1/au:+Sprecher_C/0/1/0/all/0/1">Christian Sprecher</a>, <a href="http://arxiv.org/find/cs/1/au:+Fischer_M/0/1/0/all/0/1">Marc Fischer</a>, <a href="http://arxiv.org/find/cs/1/au:+Dimitrov_D/0/1/0/all/0/1">Dimitar I. Dimitrov</a>, <a href="http://arxiv.org/find/cs/1/au:+Singh_G/0/1/0/all/0/1">Gagandeep Singh</a>, <a href="http://arxiv.org/find/cs/1/au:+Vechev_M/0/1/0/all/0/1">Martin Vechev</a>
