We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, and (ii) a flexible domain-specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
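The key idea behind differentiable abstract interpretation is that a sound over-approximation of the network's outputs under input perturbations can itself be computed with differentiable operations, so a worst-case (abstract) loss can be minimized by gradient descent alongside the usual concrete loss. Below is a minimal PyTorch sketch of this idea for the cheapest abstraction, the box (interval) domain, under an L∞ perturbation; the helper names (`box_linear`, `box_relu`, `combined_loss`) and the mixing weight `lam` are illustrative assumptions, not DiffAI's actual API.

```python
import torch
import torch.nn.functional as F

def box_linear(center, radius, weight, bias):
    # Affine layer: centers transform as usual, radii scale by |W|,
    # which keeps the output box a sound over-approximation.
    return center @ weight.t() + bias, radius @ weight.abs().t()

def box_relu(center, radius):
    # ReLU is monotone, so applying it to the box's lower and upper
    # bounds gives the exact interval image.
    lo, hi = F.relu(center - radius), F.relu(center + radius)
    return (hi + lo) / 2, (hi - lo) / 2

def combined_loss(concrete_logits, center, radius, labels, lam=0.5):
    # Worst-case logits inside the box: lower bound for the true class,
    # upper bound for every other class.
    one_hot = F.one_hot(labels, center.size(1)).float()
    worst = center + radius - 2 * radius * one_hot
    return ((1 - lam) * F.cross_entropy(concrete_logits, labels)
            + lam * F.cross_entropy(worst, labels))

# Toy usage: propagate an L-infinity ball of radius eps through one layer
# and mix the concrete and abstract losses.
torch.manual_seed(0)
x, labels, eps = torch.randn(8, 16), torch.randint(0, 10, (8,)), 0.1
w, b = torch.randn(10, 16), torch.zeros(10)
c, r = box_relu(*box_linear(x, torch.full_like(x, eps), w, b))
loss = combined_loss(F.relu(x @ w.t() + b), c, r, labels)
```

The box domain is the coarsest option; the abstract layers described in the paper are what let the precision/scalability trade-off be tuned, for instance by using a more precise domain only where it matters.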


@article{mirman2019provable,
  title   = {A Provable Defense for Deep Residual Networks},
  author  = {Mirman, Matthew and Singh, Gagandeep and Vechev, Martin},
  journal = {arXiv preprint arXiv:1903.12519},
  year    = {2019}
}