Foundations and Trends® in Programming Languages > Vol 7 > Issue 1–2

Introduction to Neural Network Verification

By Aws Albarghouthi, University of Wisconsin–Madison, USA, aws@cs.wisc.edu

 
Suggested Citation
Aws Albarghouthi (2021), "Introduction to Neural Network Verification", Foundations and Trends® in Programming Languages: Vol. 7: No. 1–2, pp. 1–157. http://dx.doi.org/10.1561/2500000051

Publication Date: 02 Dec 2021
© 2021 A. Albarghouthi
 
Subjects
Program verification, Deep learning
 


Abstract

Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This monograph covers foundational ideas from formal verification and their adaptation to reasoning about neural networks and deep learning.

DOI: 10.1561/2500000051
ISBN (print): 978-1-68083-910-4, 180 pp., $99.00
ISBN (e-book, PDF): 978-1-68083-911-1, 180 pp., $280.00
Table of contents:
1. A New Beginning
2. Neural Networks as Graphs
3. Correctness Properties
4. Logics and Satisfiability
5. Encodings of Neural Networks
6. DPLL Modulo Theories
7. Neural Theory Solvers
8. Neural Interval Abstraction
9. Neural Zonotope Abstraction
10. Neural Polyhedron Abstraction
11. Verifying with Abstract Interpretation
12. Abstract Training of Neural Networks
13. The Challenges Ahead
Acknowledgements
References

Introduction to Neural Network Verification

Over the past decade, a number of hardware and software advances have conspired to thrust deep learning and neural networks to the forefront of computing. Deep learning has created a qualitative shift in our conception of what software is and what it can do: Every day we’re seeing new applications of deep learning, from healthcare to art, and it feels like we’re only scratching the surface of a universe of new possibilities.

This book offers the first introduction to foundational ideas from automated verification as applied to deep neural networks and deep learning. It is divided into three parts:

Part 1 defines neural networks as data-flow graphs of operators over real-valued inputs.

Part 2 discusses constraint-based techniques for verification.

Part 3 discusses abstraction-based techniques for verification.
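To give a flavor of the abstraction-based approach in Part 3, here is a minimal sketch (not taken from the book; the network, weights, and function names are illustrative): a one-hidden-unit ReLU network viewed as a data-flow graph of operators, analyzed with interval arithmetic to prove a simple output-range property.

```python
def relu(x):
    """ReLU operator on a single real value."""
    return max(x, 0.0)

def affine_interval(lo, hi, w, b):
    """Interval image of x -> w*x + b over [lo, hi].

    When w is negative, the endpoints swap, so we branch on its sign.
    """
    if w >= 0:
        return w * lo + b, w * hi + b
    return w * hi + b, w * lo + b

def relu_interval(lo, hi):
    """Interval image of ReLU over [lo, hi]: both endpoints are clipped at 0."""
    return max(lo, 0.0), max(hi, 0.0)

# Network as a data-flow graph: input x feeds an affine node, then a ReLU
# node, then another affine node. We push an input interval through each node.
lo, hi = affine_interval(0.0, 1.0, 2.0, -1.0)  # x in [0, 1]  ->  [-1, 1]
lo, hi = relu_interval(lo, hi)                 # [-1, 1]      ->  [0, 1]
lo, hi = affine_interval(lo, hi, -1.0, 3.0)    # [0, 1]       ->  [2, 3]

print((lo, hi))  # (2.0, 3.0)
```

Because each interval operation over-approximates the corresponding concrete operator, the final interval proves that every input in [0, 1] produces an output in [2, 3]: a (very simple) correctness property verified by abstract interpretation.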

The book is a self-contained treatment of a topic that sits at the intersection of machine learning and formal verification. It can serve as an introduction to the field for first-year graduate students or senior undergraduates, even if they have not been exposed to deep learning or verification.

 
PGL-051