This is published under the terms of the Creative Commons Attribution licence.
The covariance matrix of signals carries some of the most essential information in multivariate analysis and other signal processing techniques. The estimation accuracy of a covariance matrix degrades when some eigenvalues of the matrix are nearly duplicated. Although this degradation has been analyzed theoretically in the asymptotic case of infinitely many variables and observations, the degradation in finite cases remains an open problem. This paper tackles the problem using the Bayesian approach, in which the learning coefficient represents the generalization error. The learning coefficient is derived in a special case, namely when the covariance matrix is spiked (all eigenvalues take the same value except one) and a shrinkage estimation method is employed. Our theoretical analysis reveals a non-monotonic property: the learning coefficient increases with the difference between the eigenvalues until a critical point, and then decreases beyond that point, converging to the value of the case with distinct eigenvalues. The result is validated by numerical experiments.
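For readers unfamiliar with the setting, the following sketch illustrates the spiked covariance model (all eigenvalues equal except one) and a generic linear shrinkage toward a scaled identity. Note this is only a stand-in for the shrinkage estimator analyzed in the paper, whose exact form is not specified here; the dimension, sample size, eigenvalues, and shrinkage weight `alpha` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 200          # dimension and number of observations (illustrative)
base, spike = 1.0, 4.0  # shared eigenvalue and the single spiked eigenvalue

# Spiked covariance: all eigenvalues equal to `base` except one equal to `spike`.
eigvals = np.full(d, base)
eigvals[0] = spike
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthonormal basis
sigma = Q @ np.diag(eigvals) @ Q.T

# Sample zero-mean Gaussian data and form the sample covariance.
X = rng.multivariate_normal(np.zeros(d), sigma, size=n)
S = X.T @ X / n

# Generic linear shrinkage toward a scaled identity (a stand-in for the
# paper's shrinkage method); the weight `alpha` is chosen arbitrarily.
alpha = 0.2
target = np.trace(S) / d * np.eye(d)
S_shrunk = (1 - alpha) * S + alpha * target

# Shrinkage pulls the sample eigenvalues toward their mean,
# counteracting the overspreading of eigenvalues at finite sample size.
print(np.linalg.eigvalsh(S).max(), np.linalg.eigvalsh(S_shrunk).max())
```

As the gap between `spike` and `base` shrinks, the eigenvalues of `sigma` become nearly duplicated, which is exactly the regime where the paper studies the behavior of the learning coefficient.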