
An Introduction to Matrix Concentration Inequalities

By Joel A. Tropp, California Institute of Technology, USA, jtropp@cms.caltech.edu

 
Suggested Citation
Joel A. Tropp (2015), "An Introduction to Matrix Concentration Inequalities", Foundations and Trends® in Machine Learning: Vol. 8: No. 1-2, pp 1-230. http://dx.doi.org/10.1561/2200000048

Publication Date: 27 May 2015
© 2015 J. A. Tropp
 
Subjects
Dimensionality reduction, Kernel methods, Randomness in computation, Design and analysis of algorithms, Information theory and computer science, Information theory and statistics, Quantum information processing, Randomized algorithms in signal processing, Sparse representations, Statistical signal processing
 


Abstract

Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that achieve all of these goals.

This monograph offers an invitation to the field of matrix concentration inequalities. It begins with some history of random matrix theory; it describes a flexible model for random matrices that is suitable for many problems; and it discusses the most important matrix concentration results. To demonstrate the value of these techniques, the presentation includes examples drawn from statistics, machine learning, optimization, combinatorics, algorithms, scientific computing, and beyond.
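For orientation, here is a representative inequality of the kind the monograph develops. It is a standard statement of the matrix Bernstein inequality, written from the common form of that result, so the notation and constants may differ slightly from the monograph's own presentation in Chapter 6. Let $\mathbf{X}_1, \ldots, \mathbf{X}_n$ be independent, centered, self-adjoint random matrices of dimension $d$ that satisfy $\lambda_{\max}(\mathbf{X}_k) \le L$ almost surely. Then, for all $t \ge 0$,

\[
\mathbb{P}\left\{ \lambda_{\max}\!\left( \sum_{k=1}^{n} \mathbf{X}_k \right) \ge t \right\}
\;\le\; d \cdot \exp\!\left( \frac{-t^2/2}{\sigma^2 + L t / 3} \right),
\qquad \text{where } \sigma^2 = \left\| \sum_{k=1}^{n} \mathbb{E}\,\mathbf{X}_k^2 \right\|.
\]

Bounds of this type control the extreme eigenvalues of a sum of bounded random matrices, the theme of Chapter 6.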

DOI: 10.1561/2200000048
ISBN: 978-1-60198-838-6 (paperback), 256 pp., $99.00
ISBN: 978-1-60198-839-3 (e-book, PDF), 256 pp., $250.00
Table of contents:
Preface
1. Introduction
2. Matrix Functions & Probability with Matrices
3. The Matrix Laplace Transform Method
4. Matrix Gaussian Series & Matrix Rademacher Series
5. A Sum of Random Positive-Semidefinite Matrices
6. A Sum of Bounded Random Matrices
7. Results Involving the Intrinsic Dimension
8. A Proof of Lieb’s Theorem
Appendices
References


 