### Fair Normalizing Flows

Authors: Mislav Balunović, Anian Ruoss, Martin Vechev
Keywords: fair representation learning, normalizing flows
Conference Page: https://iclr.cc/virtual/2022/poster/7045
Poster - Mon Apr 25 19:30 UTC+2 (Poster Session 2)
Fair Normalizing Flows (FNF) are a new approach for encoding data into a representation that ensures both fairness and utility in downstream tasks. In the practical setting where we can estimate the probability density of the inputs, FNF guarantees that an adversary cannot recover the sensitive attribute from the learned representations. This addresses a limitation of existing approaches, against which stronger adversaries can still recover sensitive attributes. We show that FNF can effectively balance fairness and accuracy on a variety of relevant datasets.
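To get an intuition for the density-matching idea, here is a minimal toy sketch (not the paper's implementation): each group's inputs pass through their own invertible map chosen so that both groups induce the same latent distribution, reducing even the optimal Bayes adversary to chance. The Gaussian input distributions and affine maps below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, s):
    # Density of N(mu, s^2), used by the optimal Bayes adversary.
    return np.exp(-((x - mu) ** 2) / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

# Hypothetical group-conditional input distributions (densities assumed known).
x0 = rng.normal(-2.0, 1.0, 100_000)  # group a = 0
x1 = rng.normal(3.0, 2.0, 100_000)   # group a = 1

def bayes_accuracy(s0, s1, d0, d1):
    # Optimal adversary: predict whichever group has higher density at s.
    s = np.concatenate([s0, s1])
    a = np.concatenate([np.zeros_like(s0), np.ones_like(s1)])
    pred = (d1(s) > d0(s)).astype(float)
    return np.mean(pred == a)

# On the raw inputs, the adversary recovers the group almost perfectly.
raw_acc = bayes_accuracy(
    x0, x1,
    lambda s: normal_pdf(s, -2.0, 1.0),
    lambda s: normal_pdf(s, 3.0, 2.0),
)

# FNF-style encoding: one invertible (here affine) map per group, chosen
# so both groups map to the same latent distribution, a standard normal.
z0 = x0 + 2.0
z1 = (x1 - 3.0) / 2.0

# On the latent codes the two densities coincide, so the optimal
# adversary drops to chance.
latent_acc = bayes_accuracy(
    z0, z1,
    lambda s: normal_pdf(s, 0.0, 1.0),
    lambda s: normal_pdf(s, 0.0, 1.0),
)
print(round(raw_acc, 3), round(latent_acc, 3))
```

Because each map is invertible, the latent code still carries all non-sensitive information for downstream tasks; only the group attribute becomes unrecoverable.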

### Provably Robust Adversarial Examples

Authors: Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev
Conference Page: https://iclr.cc/virtual/2022/poster/6160
Poster - Tue Apr 26, 19:30 UTC+2 (Poster Session 5)
We introduce the concept of provably robust adversarial examples in deep neural networks. These are adversarial examples that are generated together with a region around them proven to be robust to a set of perturbations. Our method, PARADE, generates such examples in a scalable manner: it uses adversarial attack algorithms to generate a candidate region, which is then refined until proven robust. Our experiments show PARADE successfully finds large provably robust regions to both pixel intensity and geometric perturbations, containing up to $10^{573}$ and $10^{599}$ individual adversarial examples, respectively.
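The candidate-then-refine loop can be sketched in a toy 1-D setting (this is an illustration of the idea, not PARADE itself): a quadratic "score" stands in for the network, interval arithmetic stands in for the verifier, and the candidate region is shrunk until the sound bound certifies that every point inside is adversarial.

```python
def score_bounds(lo, hi):
    # Toy "network": score(x) = x**2 - 4, where a negative score means the
    # point is classified as the adversarial class. Interval arithmetic
    # gives sound lower/upper bounds on the score over [lo, hi].
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 4.0, sq_hi - 4.0

def certify_adversarial_region(center, radius, shrink=0.5, steps=30):
    # Start from a large candidate region (e.g. produced by an attack)
    # and shrink it until the sound upper bound proves that every point
    # inside is adversarial.
    for _ in range(steps):
        lo, hi = center - radius, center + radius
        _, upper = score_bounds(lo, hi)
        if upper < 0.0:
            return lo, hi  # provably robust adversarial region
        radius *= shrink
    return None  # could not certify any region

region = certify_adversarial_region(center=0.5, radius=3.0)
print(region)
```

The returned interval is an (uncountably large) set of inputs, all of which are provably misclassified, which is the sense in which a single region can contain astronomically many individual adversarial examples.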

### Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound

Authors: Claudio Ferrari, Mark Niklas Müller, Nikola Jovanović, Martin Vechev
Conference Page: https://iclr.cc/virtual/2022/poster/6097
Poster - Wed Apr 27 11:30 UTC+2 (Poster Session 7)
MN-BaB is our most recent neural network verifier, combining precise multi-neuron constraints with the Branch-and-Bound paradigm in a single fully GPU-based solver. Combining the two most successful verifier paradigms allows us to achieve state-of-the-art performance on current benchmarks, and to perform especially well on networks that were not trained to be easily verifiable and as a result have high natural accuracy.
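The Branch-and-Bound skeleton underlying such verifiers can be shown in a deliberately simple form (plain interval bounds stand in here for MN-BaB's multi-neuron relaxations, and a 1-D quadratic stands in for the network): when the relaxation cannot decide a region, the region is split and each half is re-verified.

```python
def interval_lower_bound(lo, hi):
    # Sound lower bound on score(x) = x**2 - 2*x + 2 over [lo, hi],
    # bounding each term independently (a coarse convex relaxation).
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    lin_lo = -2.0 * hi
    return sq_lo + lin_lo + 2.0

def branch_and_bound(lo, hi, eps=1e-3):
    # Goal: prove score(x) > 0 for all x in [lo, hi]. If the relaxation
    # is inconclusive, branch: split the domain and recurse on each half.
    if interval_lower_bound(lo, hi) > 0.0:
        return True
    if hi - lo < eps:
        return False  # cannot decide at this resolution
    mid = (lo + hi) / 2.0
    return branch_and_bound(lo, mid, eps) and branch_and_bound(mid, hi, eps)

# score(x) = (x - 1)**2 + 1 is truly positive everywhere, but the coarse
# bound fails on the full domain; branching recovers the proof.
print(branch_and_bound(-1.0, 3.0))
```

The precision/branching trade-off is exactly what MN-BaB tunes: tighter (multi-neuron) relaxations certify larger regions directly, so far fewer branches are needed.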

### Bayesian Framework for Gradient Leakage

Authors: Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev
Keywords: federated learning, privacy, gradient leakage
Conference Page: https://iclr.cc/virtual/2022/poster/6934
Poster - Wed Apr 27 19:30 UTC+2 (Poster Session 8)
Recent work has challenged the notion that federated learning preserves data privacy by showing that various attacks can reconstruct the original data from gradient updates. In this post, we investigate what the optimal reconstruction attack is and show how it connects to previously proposed attacks. Furthermore, we show that most existing defenses are not effective against strong attacks. Our findings indicate that constructing effective defenses and evaluating them remains an open problem.
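To see why gradients can leak data at all, consider a classic toy case (an illustration of gradient leakage in general, not the paper's Bayesian framework): for a linear model with a bias trained on squared loss, the client's single data point can be read off the shared gradient in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1          # hypothetical model parameters
x_true, y = rng.normal(size=4), 1.0     # the client's private data point

# Client computes its update for loss = (w.x + b - y)**2 and shares it.
r = w @ x_true + b - y                  # residual
grad_w = 2.0 * r * x_true               # gradient w.r.t. the weights
grad_b = 2.0 * r                        # gradient w.r.t. the bias

# Server-side attack: grad_w = grad_b * x, so the input is recovered
# exactly by elementwise division (assuming the residual is nonzero).
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x_true))
```

For deeper networks and batches no such closed form exists, which is why practical attacks instead optimize a dummy input so that its gradient matches the observed one; the post's question is what the *optimal* such attack looks like.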

### Boosting Randomized Smoothing with Variance Reduced Classifiers (Spotlight)

Authors: Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
Keywords: randomized smoothing, certified robustness, ensembles
Conference Page: https://iclr.cc/virtual/2022/spotlight/6328
Spotlight - Thu Apr 28, 19:30 UTC+2 (Poster Session 11)
Ensembles are particularly suitable base models for constructing certifiably robust classifiers via Randomized Smoothing (RS). Here, we motivate this result theoretically and show empirically that ensembles obtain state-of-the-art results in multiple settings. The key insight is that the reduced variance of ensembles over the perturbations introduced in RS leads to significantly more consistent classifications for a given input. This, in turn, leads to substantially increased certifiable radii for samples close to the decision boundary.
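The variance-reduction effect can be illustrated with a toy model (an illustration of the mechanism, not the paper's construction): treat each base classifier's logit under an RS noise sample as the true margin plus idiosyncratic per-model noise. Averaging k models shrinks that noise, votes become more consistent, and the standard RS certified radius sigma * Phi^{-1}(pA) grows. All numbers below are made up for the demonstration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
sigma = 1.0            # RS smoothing noise level
margin = 0.5           # true class margin at the input
model_noise = 2.0      # hypothetical per-model logit noise
n_samples = 200_000    # Monte Carlo votes

def vote_rate(k):
    # Fraction of noisy samples on which an ensemble of k base
    # classifiers votes for the correct class. Averaging the k
    # idiosyncratic noise terms reduces their variance by a factor k.
    smoothing = rng.normal(0.0, sigma, n_samples)
    idiosyncratic = rng.normal(0.0, model_noise, (k, n_samples)).mean(axis=0)
    logits = margin + smoothing + idiosyncratic
    return (logits > 0).mean()

rates = {}
for k in (1, 5):
    rates[k] = vote_rate(k)
    radius = sigma * NormalDist().inv_cdf(rates[k])
    print(k, round(rates[k], 3), round(radius, 3))
```

Because `inv_cdf` is steep only near 0.5 and 1, even a modest increase in the vote rate pA translates into a noticeably larger certified radius, which is exactly where samples near the decision boundary benefit.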