Neural Ordinary Differential Equations (NODEs) are a novel neural architecture built around initial value problems with learned dynamics. Although thought to be inherently more robust against adversarial perturbations, they were recently shown to be vulnerable to strong adversarial attacks, highlighting the need for formal guarantees. In this work, we tackle this challenge and propose GAINS, an analysis framework for NODEs based on three key ideas: (i) a novel class of ODE solvers based on variable but discrete time steps, (ii) an efficient graph representation of solver trajectories, and (iii) a bound propagation algorithm operating on this graph representation. Together, these advances enable the efficient analysis and certified training of high-dimensional NODEs, which we demonstrate in an extensive evaluation on computer vision and time-series forecasting problems.
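To make idea (iii) concrete, the following is a minimal, hedged sketch of interval bound propagation through a fixed-step forward-Euler solver on toy linear dynamics. It is not the GAINS algorithm (which operates on variable-step trajectory graphs); the function name `ibp_euler`, the dynamics matrix `W`, and all numeric choices are illustrative assumptions.

```python
import numpy as np

def ibp_euler(W, lo, hi, h=0.1, steps=10):
    """Propagate elementwise interval bounds [lo, hi] through the
    Euler recurrence z_{k+1} = z_k + h * W @ z_k.

    Each step is the affine map A = I + h*W, so interval arithmetic in
    center/radius form is exact per step: the center is mapped by A and
    the radius by |A|. (Illustrative sketch, not the GAINS procedure.)
    """
    A = np.eye(W.shape[0]) + h * W            # one Euler step is affine
    c = (lo + hi) / 2.0                       # interval center
    r = (hi - lo) / 2.0                       # interval radius (>= 0)
    for _ in range(steps):
        c = A @ c                             # center follows the dynamics
        r = np.abs(A) @ r                     # radius grows with |A|
    return c - r, c + r

# Toy stable dynamics and an input box around (1, 0).
W = np.array([[-1.0, 0.5],
              [0.0, -0.5]])
lo = np.array([0.9, -0.1])
hi = np.array([1.1,  0.1])
l_out, u_out = ibp_euler(W, lo, hi)
```

Any exact Euler trajectory starting inside the input box stays inside the returned output box, which is the soundness property certified training builds on.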

@inproceedings{zeqiri2022efficient,
  title     = {Efficient Robustness Verification of Neural Ordinary Differential Equations},
  author    = {Mustafa Zeqiri and Mark Niklas Mueller and Marc Fischer and Martin Vechev},
  booktitle = {The Symbiosis of Deep Learning and Differential Equations II},
  year      = {2022},
  url       = {}
}