
Joel A. Tropp (2015), "An Introduction to Matrix Concentration Inequalities", Foundations and Trends® in Machine Learning: Vol. 8: No. 1-2, pp 1-230. http://dx.doi.org/10.1561/2200000048

© 2015 J. A. Tropp

Keywords: Dimensionality reduction, Kernel methods, Randomness in computation, Design and analysis of algorithms, Information theory and computer science, Information theory and statistics, Quantum information processing, Randomized algorithms in signal processing, Sparse representations, Statistical signal processing

**In this article:**

Preface

1. Introduction

2. Matrix Functions & Probability with Matrices

3. The Matrix Laplace Transform Method

4. Matrix Gaussian Series & Matrix Rademacher Series

5. A Sum of Random Positive-Semidefinite Matrices

6. A Sum of Bounded Random Matrices

7. Results Involving the Intrinsic Dimension

8. A Proof of Lieb’s Theorem

Appendices

References

Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. It is therefore desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that achieve all of these goals.

This monograph offers an invitation to the field of matrix concentration inequalities. It begins with some history of random matrix theory; it describes a flexible model for random matrices that is suitable for many problems; and it discusses the most important matrix concentration results. To demonstrate the value of these techniques, the presentation includes examples drawn from statistics, machine learning, optimization, combinatorics, algorithms, scientific computing, and beyond.
