This blog post summarizes the key findings of our paper *On the Paradox of Certified Training*, which was recently published in TMLR.

We attempt to explain the phenomenon where most state-of-the-art methods for certified training based on convex relaxations (such as FastIBP or the latest breakthrough SABR) rely on the loose interval propagation (IBP/Box), while intuitively, tighter relaxations (i.e., relaxations that more tightly overapproximate the non-linearities in the network) should lead to better results. Several prior works observed this paradox and proposed hypotheses to explain it, but it was never investigated in a principled way.

We start by proposing a way to quantify tightness (see the paper for details) and by thoroughly reproducing the paradox: across a wide range of settings, tighter relaxations consistently lead to lower certified robustness than the loose IBP relaxation. An example is shown in the following table, which groups equivalent methods from prior work (below we refer to each group by the name in bold):

| Relaxation | Tightness | Certified (%) |
|---|---|---|
| **IBP** / Box | 0.73 | 86.8 |
| **hBox** / Symbolic Intervals | 1.76 | 83.7 |
| **CROWN** / DeepPoly | 3.36 | 70.2 |
| **DeepZ** / CAP / FastLin / Neurify | 3.00 | 69.8 |
| **CROWN-IBP (R)** | 2.15 | 75.4 |
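To make the object of comparison concrete: IBP/Box simply pushes an axis-aligned box through the network, layer by layer. The following minimal PyTorch sketch (our illustration, not code from the paper) propagates such a box through one linear layer and a ReLU; tighter relaxations replace the box with richer shapes such as zonotopes or per-neuron linear bounds.

```python
import torch

def ibp_linear(l, u, W, b):
    """Propagate the box [l, u] through a linear layer W x + b (standard IBP)."""
    center, radius = (u + l) / 2, (u - l) / 2
    new_center = W @ center + b
    new_radius = W.abs() @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(l, u):
    """ReLU is monotone, so the box is simply clamped elementwise."""
    return l.clamp(min=0), u.clamp(min=0)

# Toy example: an eps-ball around an input point, pushed through one layer.
x, eps = torch.tensor([1.0, -2.0]), 0.1
W, b = torch.tensor([[1.0, -1.0], [0.5, 2.0]]), torch.tensor([0.0, 0.5])
l, u = ibp_relu(*ibp_linear(x - eps, x + eps, W, b))
print(l, u)  # sound, but loose, bounds on the layer's output
```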

Our key observation is that tightness is not the only property of a relaxation that matters: other latent properties affect how well a relaxation works when used inside a training procedure. More concretely, each of the tighter relaxations has either unfavorable continuity (i.e., the corresponding loss function is discontinuous with respect to the network weights) or unfavorable sensitivity (i.e., the corresponding loss function is highly sensitive to small perturbations of the network weights), either of which hinders successful optimization. By considering all three properties jointly, we can more easily interpret the seemingly counterintuitive results.
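For concreteness, the "corresponding loss function" is the certified (worst-case) cross-entropy built from the per-logit bounds that a relaxation produces for a given input region. The sketch below shows this standard construction (our illustration; the bounds could come, e.g., from the IBP code above):

```python
import torch
import torch.nn.functional as F

def certified_loss(logit_lb, logit_ub, target):
    """Worst-case cross-entropy: the true class takes its lower bound,
    every other class takes its upper bound."""
    is_true = F.one_hot(target, num_classes=logit_lb.shape[-1]).bool()
    worst_logits = torch.where(is_true, logit_lb, logit_ub)
    return F.cross_entropy(worst_logits, target)

# Toy example: bounds on 3 logits for a single input with true class 0.
lb = torch.tensor([[1.0, -2.0, 0.5]])
ub = torch.tensor([[2.0,  1.5, 3.0]])
print(certified_loss(lb, ub, torch.tensor([0])))
```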

The plot below shows the relaxation of the ReLU non-linearity used by CROWN, for the example input range defined by $l=-5$ and $u=8$. By reducing $u$ (using the bottom slider), we can directly observe the discontinuity of CROWN at the point where its heuristic choice of the lower linear bound switches, i.e., at $|l|=|u|$. Using the same plot, we can also observe the discontinuity of hBox at $l=0$. These observations imply discontinuities in the loss when such a relaxation is used in training, which we further confirm empirically in realistic settings. Finally, the plot shows that no discontinuities arise for IBP and DeepZ; a formal proof of their continuity in the general case is given in the paper.

Open Interactive Plot
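For readers who prefer code to sliders, the snippet below (our illustration, following the standard CROWN/DeepPoly relaxation of an unstable ReLU with $l < 0 < u$) evaluates the lower-bound slope that CROWN picks and the IBP output box while $u$ sweeps across $|l|$: the slope, and hence the resulting bound, jumps, while the IBP box changes smoothly.

```python
def crown_relu_lower_slope(l, u):
    """CROWN/DeepPoly heuristic for an unstable ReLU (l < 0 < u): the lower bound is
    y >= lam * x, with lam chosen to minimize the area of the relaxation, i.e.,
    lam = 1 when u >= |l| and lam = 0 otherwise."""
    return 1.0 if u >= -l else 0.0

def ibp_relu_box(l, u):
    """IBP output interval of ReLU: clamp both ends at 0."""
    return max(l, 0.0), max(u, 0.0)

l = -5.0
for u in (5.1, 5.0, 4.9):  # sweep u across |l| = 5
    print(u, crown_relu_lower_slope(l, u), ibp_relu_box(l, u))
# CROWN's slope jumps from 1 to 0 between u = 5.0 and u = 4.9 (a discontinuity in the
# induced bounds), while the IBP box [0, u] varies continuously with u.
```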

While the sensitivity of the loss functions is harder to illustrate in a toy example like the one above, our derivation (Section 4.3 of the paper) shows that the bounds used by CROWN, CROWN-IBP (R) and DeepZ lead to certified training losses that are highly sensitive to small changes in the network weights, while the losses of IBP and hBox are not sensitive and induce more favorable loss landscapes (a simple empirical probe of this notion is sketched after the table below). With these observations, we expand the earlier table to include all three relaxation properties: tightness, continuity and sensitivity. It illustrates that every attempt to use a tighter relaxation in certified training has introduced an unfavorable property of the loss, which explains the failure to outperform the continuous and non-sensitive IBP.

| Relaxation | Tightness | Continuity | Sensitivity | Certified (%) |
|---|---|---|---|---|
| **IBP** / Box | 0.73 | $\checkmark$ | $\checkmark$ | 86.8 |
| **hBox** / Symbolic Intervals | 1.76 | $\times$ | $\checkmark$ | 83.7 |
| **CROWN** / DeepPoly | 3.36 | $\times$ | $\times$ | 70.2 |
| **DeepZ** / CAP / FastLin / Neurify | 3.00 | $\checkmark$ | $\times$ | 69.8 |
| **CROWN-IBP (R)** | 2.15 | $\times$ | $\times$ | 75.4 |
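We cannot reproduce the paper's analytical treatment of sensitivity here, but the notion can be probed empirically with a simple finite-difference check: perturb the weights slightly and measure how much the certified loss moves. A minimal sketch, assuming a hypothetical `certified_loss_fn(model, x, y, eps)` that wraps any of the relaxations above:

```python
import copy
import torch

@torch.no_grad()
def loss_sensitivity(model, certified_loss_fn, x, y, eps, noise_scale=1e-3, trials=20):
    """Average absolute change of the certified loss under small random weight
    perturbations; larger values indicate a more sensitive, harder-to-optimize loss."""
    base = certified_loss_fn(model, x, y, eps).item()
    deltas = []
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(noise_scale * torch.randn_like(p))
        deltas.append(abs(certified_loss_fn(noisy, x, y, eps).item() - base))
    return sum(deltas) / trials
```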

Next steps

A natural question is whether the unfavorable properties of existing relaxations can be fixed to make them more suitable for certified training. In the paper we systematically investigate modifications of existing relaxations and find that designing a relaxation with all favorable properties is difficult, as the properties induce complex tradeoffs that depend on the setting. Still, such relaxations may exist, and future work might be able to use our findings to discover them.

A more promising direction seems to be combining existing convex relaxations with modified training procedures designed to best exploit the benefits of each relaxation. Recent successful examples include COLT, which incorporates counterexample search into training, CROWN-IBP, which heuristically combines the losses of two relaxations during training, and two recent methods that focus on IBP, aiming to improve its training procedure via better initialization and regularization (FastIBP) or via the propagation of smaller input regions during training (SABR).
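To illustrate the last idea, the sketch below captures our reading of SABR-style region selection: instead of the full $\epsilon$-ball, a much smaller box (of radius $\tau\epsilon$) around an adversarially chosen point is propagated with IBP. The names `attack_fn` and `tau` are placeholders for illustration, not the authors' API.

```python
import torch

def small_box_region(x, eps, tau, attack_fn):
    """Sketch of small-region selection for SABR-style training (our reading of the idea):
    choose a box of radius tau * eps around an adversarially found point, kept inside the
    original eps-ball around x. The returned box [l, u] is then propagated with IBP."""
    small_eps = tau * eps
    shrink = eps - small_eps
    x_adv = attack_fn(x, shrink)  # placeholder: any attack returning a point in B(x, shrink)
    center = torch.maximum(torch.minimum(x_adv, x + shrink), x - shrink)
    return center - small_eps, center + small_eps
```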

Finally, it is worth noting that there are other promising approaches to neural network certification that do not use convex relaxations and are thus not affected by the tradeoffs between relaxation properties. Examples in this direction include Randomized Smoothing, which offers high-probability robustness certificates, and custom certification-friendly model architectures such as $l_\infty$-distance nets. While not affected by the limitations of convex relaxations, these approaches introduce challenges of their own, such as optimization difficulties and additional inference-time cost.
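As a closing illustration, the sketch below shows the core of randomized-smoothing prediction as commonly described in the literature (our simplification, not code from the paper): classify many noisy copies of the input and take a majority vote; an actual certificate additionally requires a statistical lower bound on the top-class probability.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Monte-Carlo prediction of a smoothed classifier: majority vote over
    Gaussian-noised copies of the input x (shape [C, H, W])."""
    noise = sigma * torch.randn(n_samples, *x.shape)
    votes = model(x.unsqueeze(0) + noise).argmax(dim=-1)
    return torch.bincount(votes, minlength=num_classes).argmax()
```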