
Rethinking Biased Estimation: Improving Maximum Likelihood and the Cramér–Rao Bound

By Yonina C. Eldar, Department of Electrical Engineering, Technion — Israel Institute of Technology, Israel, yonina@ee.technion.ac.il

 
Suggested Citation
Yonina C. Eldar (2008), "Rethinking Biased Estimation: Improving Maximum Likelihood and the Cramér–Rao Bound", Foundations and Trends® in Signal Processing: Vol. 1: No. 4, pp 305-449. http://dx.doi.org/10.1561/2000000008

Publication Date: 04 Jul 2008
© 2008 Y. C. Eldar
 
Subjects
Statistical signal processing
 

In this article:
1 Introduction 
2 The Cramér–Rao Bound and Extensions 
3 Mean-Squared Error Bounds 
4 Minimax and Blind Minimax Estimation 
5 The SURE Principle 
6 Bounded Error Estimation 
Acknowledgments 
Notations and Acronyms 
A Convex Optimization Methods 
References 

Abstract

One of the prime goals of statistical estimation theory is the development of performance bounds when estimating parameters of interest in a given model, as well as the construction of estimators that achieve these limits. When the parameters to be estimated are deterministic, a popular approach is to bound the mean-squared error (MSE) achievable within the class of unbiased estimators. Although it is well known that lower MSE can be obtained by allowing for a bias, in applications it is typically unclear how to choose an appropriate bias.
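For orientation, it helps to recall the generic form the CRB takes once a bias is admitted (stated here in standard notation, which need not match the survey's own statements): if an estimator \hat{\theta} of \theta has bias b(\theta) = E[\hat{\theta}] - \theta and the model has Fisher information matrix J(\theta), then under the usual regularity conditions

\mathrm{MSE}(\hat{\theta},\theta) \;\succeq\; \bigl(I + \nabla_{\theta} b(\theta)\bigr)\, J^{-1}(\theta)\, \bigl(I + \nabla_{\theta} b(\theta)\bigr)^{T} \;+\; b(\theta)\, b^{T}(\theta).

A suitably chosen bias gradient can therefore push the bound, and the achievable MSE, below the unbiased CRB J^{-1}(\theta); the difficulty addressed in the survey is that the minimizing bias generally depends on the unknown \theta itself.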

In this survey we introduce MSE bounds that are lower than the unbiased Cramér–Rao bound (CRB) for all values of the unknowns. We then present a general framework for constructing biased estimators with smaller MSE than the standard maximum-likelihood (ML) approach, regardless of the true unknown values. Specializing the results to the linear Gaussian model, we derive a class of estimators that dominate least-squares in terms of MSE. We also introduce methods for choosing regularization parameters in penalized ML estimators that outperform standard techniques such as cross validation.
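As a concrete, self-contained illustration of MSE dominance over least-squares, the following Python sketch compares the ML/least-squares estimator of a Gaussian mean with the classical positive-part James-Stein shrinkage estimator, which has uniformly smaller MSE whenever the dimension is at least three. This is a textbook instance of the phenomenon rather than the survey's blind minimax construction; the dimension, noise level, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 10            # dimension of the unknown vector (dominance needs n >= 3)
sigma = 1.0       # known noise standard deviation
theta = rng.normal(size=n)   # an arbitrary fixed "true" deterministic parameter
trials = 100_000

err_ml = err_js = 0.0
for _ in range(trials):
    y = theta + sigma * rng.normal(size=n)                  # y ~ N(theta, sigma^2 I)
    theta_ml = y                                            # ML / least-squares estimate
    shrink = max(0.0, 1.0 - (n - 2) * sigma**2 / np.dot(y, y))
    theta_js = shrink * y                                   # positive-part James-Stein
    err_ml += np.sum((theta_ml - theta) ** 2)
    err_js += np.sum((theta_js - theta) ** 2)

print("ML / least-squares MSE:", err_ml / trials)
print("James-Stein MSE       :", err_js / trials)

Running the sketch with any fixed theta shows the shrinkage estimator's empirical MSE below that of least-squares; the gap is largest when the norm of theta is small relative to the noise level.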

DOI: 10.1561/2000000008
ISBN: 978-1-60198-130-1
156 pp. $99.00
 
ISBN: 978-1-60198-131-8
156 pp. $125.00

Rethinking Biased Estimation

Rethinking Biased Estimation discusses methods to improve the accuracy of unbiased estimators used in many signal processing problems. At the heart of the proposed methodology is the use of the mean-squared error (MSE) as the performance criterion. One of the prime goals of statistical estimation theory is the development of performance bounds when estimating parameters of interest in a given model, as well as the construction of estimators that achieve these limits. When the parameters to be estimated are deterministic, a popular approach is to bound the MSE achievable within the class of unbiased estimators. Although it is well known that lower MSE can be obtained by allowing for a bias, in applications it is typically unclear how to choose an appropriate bias.

Rethinking Biased Estimation introduces MSE bounds that are lower than the unbiased Cramér–Rao bound (CRB) for all values of the unknowns. It then presents a general framework for constructing biased estimators with smaller MSE than the standard maximum-likelihood (ML) approach, regardless of the true unknown values. Specializing the results to the linear Gaussian model, it derives a class of estimators that dominate least-squares in terms of MSE. It also introduces methods for choosing regularization parameters in penalized ML estimators that outperform standard techniques such as cross validation.
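To give a flavor of the SURE principle covered in Section 5, the sketch below applies Stein's unbiased risk estimate to the simplest shrinkage family, \hat{\theta}_\lambda = y/(1+\lambda) with y ~ N(\theta, \sigma^2 I): SURE yields an estimate of the MSE that is computable from the data alone, so the shrinkage parameter can be chosen without knowing \theta and without cross validation. The model, parameter grid, and variable names are illustrative assumptions; the survey treats general penalized ML estimators.

import numpy as np

rng = np.random.default_rng(1)
n = 50
sigma = 1.0
theta = 0.5 * rng.normal(size=n)          # fixed "true" parameter (used for checking only)
y = theta + sigma * rng.normal(size=n)    # observation: y ~ N(theta, sigma^2 I)

lams = np.linspace(0.0, 5.0, 501)         # candidate shrinkage parameters

def sure(lam):
    # Stein's unbiased risk estimate of E||y/(1+lam) - theta||^2,
    # computable without knowing theta.
    residual = (lam / (1.0 + lam)) ** 2 * np.dot(y, y)    # ||theta_hat - y||^2
    divergence = n / (1.0 + lam)                          # divergence of the map y -> y/(1+lam)
    return -n * sigma**2 + residual + 2.0 * sigma**2 * divergence

lam_sure = lams[np.argmin([sure(l) for l in lams])]

# Oracle comparison (uses theta, so it is not available in practice):
sq_err = [np.sum((y / (1.0 + l) - theta) ** 2) for l in lams]
lam_oracle = lams[np.argmin(sq_err)]

print("lambda chosen by SURE:", lam_sure)
print("oracle lambda        :", lam_oracle)

Over repeated draws the SURE-selected parameter typically tracks the oracle choice closely, which is the sense in which SURE-based tuning can stand in for cross validation in this setting.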

 
SIG-008