Covariance selection and estimation via penalised normal likelihood
Jianhua Z. Huang, Naiping Liu, Mohsen Pourahmadi, Linxu Liu
Abstract:
We propose a nonparametric method to identify parsimony
and to produce a statistically efficient
estimator of a large covariance matrix. We reparameterise
a covariance matrix through the modified Cholesky decomposition of its
inverse or the one-step-ahead predictive representation of the vector of
responses and reduce the nonintuitive task of modelling covariance matrices to
the familiar task of model selection and estimation for a sequence of
regression models. The Cholesky factor containing these regression
coefficients is likely to have many off-diagonal elements that are zero or
close to zero. In this situation, penalised normal likelihoods with $L_1$
and $L_2$ penalties are shown to be closely related to
Tibshirani's (1996) LASSO approach and to ridge regression, respectively.
Adding either penalty to the likelihood helps to produce more stable
estimators by introducing shrinkage to the elements of the Cholesky factor,
while the $L_1$ penalty, because of its singularity at the origin,
will set some elements exactly to zero and
produce interpretable models. An algorithm is developed to
compute the estimator and select the tuning parameter.
The proposed maximum penalised likelihood estimator is illustrated
using simulation and a real dataset involving estimation
of a $102\times 102$ covariance matrix.
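As a minimal sketch, the regression reparameterisation underlying the method can be illustrated with an $L_2$ (ridge) penalty, for which each penalised regression has a closed form. This is not the authors' algorithm or tuning procedure; the AR(1) covariance, dimension, sample size and tuning parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, lam = 6, 500, 0.1          # dimension, sample size, ridge tuning parameter

# Simulate data with an AR(1)-type covariance (illustrative choice)
phi = 0.6
Sigma = phi ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
Y = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Modified Cholesky decomposition of the inverse: Sigma^{-1} = T' D^{-1} T,
# with T unit lower triangular and D diagonal. Row t of T holds (negated)
# coefficients from regressing y_t on its predecessors y_1, ..., y_{t-1}.
T = np.eye(p)
d = np.empty(p)
d[0] = Y[:, 0].var()
for t in range(1, p):
    X, y = Y[:, :t], Y[:, t]
    # ridge (L_2-penalised) regression of y_t on its predecessors
    beta = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ y)
    T[t, :t] = -beta
    d[t] = np.mean((y - X @ beta) ** 2)   # innovation (prediction error) variance

Omega_hat = T.T @ np.diag(1.0 / d) @ T    # estimate of Sigma^{-1}
```

An $L_1$ penalty would replace the closed-form solve with a LASSO fit in each regression, zeroing out some entries of $T$ and thereby selecting a sparse model.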
Some key words: Cholesky decomposition; Cross-validation;
LASSO; $L_p$ penalty; Model selection; Penalised likelihood;
Shrinkage.