Foundations and Trends® in Optimization, Vol. 4, Issue 3-4

Algorithms for Verifying Deep Neural Networks

Changliu Liu, Carnegie Mellon University, USA, cliu6@andrew.cmu.edu; Tomer Arnon, Stanford University, USA, tarnon@stanford.edu; Christopher Lazarus, Stanford University, USA, clazarus@stanford.edu; Christopher Strong, Stanford University, USA, castrong@stanford.edu; Clark Barrett, Stanford University, USA, barrett@cs.stanford.edu; Mykel J. Kochenderfer, Stanford University, USA, mykel@stanford.edu
 
Suggested Citation
Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher Strong, Clark Barrett and Mykel J. Kochenderfer (2021), "Algorithms for Verifying Deep Neural Networks", Foundations and Trends® in Optimization: Vol. 4: No. 3-4, pp 244-404. http://dx.doi.org/10.1561/2400000035

Publication Date: 11 Feb 2021
© 2021 Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher Strong, Clark Barrett and Mykel J. Kochenderfer
 
Subjects
Optimization
 

In this article:
1. Introduction
2. Problem Formulation
3. Overview of Methods
4. Preliminaries
5. Reachability
6. Primal Optimization
7. Dual Optimization
8. Search and Reachability
9. Search and Optimization
10. Comparison and Results
11. Conclusion
Acknowledgments
References

Abstract

Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems.

DOI:10.1561/2400000035
ISBN: 978-1-68083-786-5 (print, 176 pp.)
ISBN: 978-1-68083-787-2 (e-book, 176 pp.)

Algorithms for Verifying Deep Neural Networks

Neural networks have been widely used in many applications, such as image classification and understanding, language processing, and control of autonomous systems. These networks work by mapping inputs to outputs through a sequence of layers. At each layer, the input to that layer undergoes an affine transformation followed by a simple nonlinear transformation before being passed to the next layer.
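This layer-wise computation can be sketched in a few lines. The sketch below is illustrative only: the ReLU nonlinearity and the tiny random network are assumptions, not code from the monograph.

```python
import numpy as np

def layer(x, W, b):
    """One layer: affine transformation, then an elementwise nonlinearity (here, ReLU)."""
    return np.maximum(W @ x + b, 0.0)

# A small two-layer network as a composition of such layers (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = np.array([0.5, -0.2])
y = layer(layer(x, W1, b1), W2, b2)
print(y.shape)  # a 1-dimensional output
```

Verification asks whether, for every input in some set, the output of such a composition satisfies a given property; the difficulty comes from the nonlinearity at each layer.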

Neural networks are being used for increasingly important tasks, and in some cases incorrect outputs can lead to costly consequences, so validating their behavior is vital. The sheer size of these networks, however, makes such validation infeasible with traditional methods.

In this monograph, the authors survey a class of methods that are capable of formally verifying properties of deep neural networks. In doing so, they introduce a unified mathematical framework for verifying neural networks, classify existing methods under this framework, provide pedagogical implementations of existing methods, and compare those methods on a set of benchmark problems.
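To give a flavor of the reachability-style methods surveyed, here is a minimal interval bound propagation sketch: an input box is pushed through each layer to obtain a sound over-approximation of the output range. This is a naive illustration under assumed toy weights, not one of the monograph's implementations.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Interval image of x -> W @ x + b: split W by sign to pair each entry
    with whichever input bound minimizes (resp. maximizes) the output."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical tiny network with a single hidden layer.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Propagate the input box [-1, 1] x [-1, 1] through the network.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # [0.5] [4.0]
```

Because [lo, hi] over-approximates the reachable outputs, any property of the form "output ≤ c" is verified for the whole input box whenever hi ≤ c; when the bound is too loose, the methods in the monograph refine it via tighter relaxations, optimization, or search.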

Algorithms for Verifying Deep Neural Networks serves as a tutorial for students and professionals interested in this emerging field as well as a benchmark to facilitate the design of new verification algorithms.

 
OPT-035