We consider robust covariance estimation with an emphasis on Tyler’s M-estimator. This method provides accurate inference of an unknown covariance in non-standard settings, including heavy-tailed distributions and outlier-contaminated scenarios. We begin with a survey of the estimator and its various derivations in the classical unconstrained setting. These derivations rely on the theory of g-convex analysis, which we briefly review. Building on this background, we enhance robust covariance estimation via g-convex regularization, allowing accurate inference from fewer samples. We consider shrinkage, diagonal loading, and prior knowledge in the form of symmetry and Kronecker structures. We introduce these concepts to the world of robust covariance estimation, and demonstrate how to exploit them in a computationally and statistically efficient manner.
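To make the abstract concrete, the following is a minimal sketch of the classical fixed-point iteration for Tyler's M-estimator in the unconstrained setting: each sample is reweighted by the inverse of its current Mahalanobis-type distance, and the scatter matrix is renormalized to fix the inherent scale ambiguity. The function name and tolerance parameters are illustrative choices, not from the monograph itself.

```python
import numpy as np

def tyler_estimator(X, max_iter=100, tol=1e-6):
    """Fixed-point iteration for Tyler's M-estimator of scatter.

    X: (n, p) array of n zero-mean samples in R^p.
    Returns a (p, p) scatter estimate normalized to trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # Per-sample weights 1 / (x_i^T Sigma^{-1} x_i).
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        # Weighted sample covariance, scaled by p/n.
        sigma_new = (p / n) * (X * w[:, None]).T @ X
        # Tyler's estimator is only defined up to scale; fix trace = p.
        sigma_new *= p / np.trace(sigma_new)
        if np.linalg.norm(sigma_new - sigma, 'fro') < tol:
            return sigma_new
        sigma = sigma_new
    return sigma
```

Because the weights depend only on sample directions, the resulting estimate is invariant to elliptical radial distributions, which is the source of the estimator's robustness to heavy tails.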
Covariance matrices have found applications in many diverse areas, including beamforming in array processing, portfolio analysis in finance, data classification, and the handling of high-frequency data.
Structured Robust Covariance Estimation considers the estimation of covariance matrices in non-standard conditions, including heavy-tailed distributions and outlier contamination. Prior knowledge of the structure of these matrices is exploited in order to improve the estimation accuracy. The distributions, structures, and algorithms are all based on an extension of convex optimization to manifolds.
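One widely used way to inject such prior knowledge is a shrinkage (diagonal-loading) variant of the fixed-point iteration, which remains well defined even when the number of samples is smaller than the dimension. The sketch below assumes one common form of this regularization, shrinking toward the identity with a user-chosen coefficient `alpha`; the function and parameter names are illustrative.

```python
import numpy as np

def regularized_tyler(X, alpha=0.2, max_iter=100, tol=1e-6):
    """Shrinkage-regularized fixed-point iteration for robust scatter.

    X: (n, p) array of n zero-mean samples in R^p.
    alpha in (0, 1]: shrinkage weight toward the identity matrix.
    Returns a (p, p) scatter estimate normalized to trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        # Convex combination of the Tyler update and the identity:
        # the loading term keeps every iterate positive definite,
        # even in the undersampled regime n < p.
        sigma_new = ((1 - alpha) * (p / n) * (X * w[:, None]).T @ X
                     + alpha * np.eye(p))
        sigma_new *= p / np.trace(sigma_new)
        if np.linalg.norm(sigma_new - sigma, 'fro') < tol:
            return sigma_new
        sigma = sigma_new
    return sigma
```

The g-convexity of the underlying objective and regularizer is what guarantees that such iterations converge to a unique global minimizer, which is the theme the monograph develops.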
Structured Robust Covariance Estimation also provides a self-contained introduction to and survey of the theory known as geodesic convexity, a generalized form of convexity associated with positive definite matrix variables. The fundamental g-convex sets and functions are detailed, along with the operations that preserve them and their application to covariance estimation.
This monograph will be of interest to researchers and students working in signal processing, statistics and optimization.