An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss

C. Wang, Binyan Jiang

Research output: Journal article (peer-reviewed)

4 Citations (Scopus)


The estimation of high dimensional precision matrices has been a central topic in statistical learning. However, as the number of parameters scales quadratically with the dimension p, many state-of-the-art methods do not scale well to problems with a very large p. In this paper, we propose a very efficient algorithm for precision matrix estimation via penalized quadratic loss functions. Under the high-dimension, low-sample-size setting, the computational complexity of our algorithm is linear in both the sample size and the number of parameters. Such a computational complexity is in some sense optimal, as it matches the complexity of computing the sample covariance matrix itself. Numerical studies show that our algorithm is much more efficient than other state-of-the-art methods when the dimension p is very large.
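To illustrate the kind of procedure the abstract describes, below is a minimal ADMM sketch for a penalized quadratic (D-trace-type) loss, min over Omega of 0.5*tr(Omega^2 S) - tr(Omega) + lam*||Omega||_1. This is a hypothetical illustration assuming that particular quadratic loss; it is not the authors' optimized implementation (in particular, it uses a p x p eigendecomposition rather than the paper's linear-in-parameters scheme):

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 penalty."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def admm_quadratic_loss(S, lam, rho=1.0, n_iter=200, tol=1e-6):
    """ADMM sketch for min_Omega 0.5*tr(Omega^2 S) - tr(Omega) + lam*||Omega||_1.

    Hypothetical illustration only; `lam` is the l1 penalty level and `rho`
    the ADMM augmented-Lagrangian parameter (names are ours, not the paper's).
    """
    p = S.shape[0]
    eigvals, Q = np.linalg.eigh(S)  # S = Q diag(eigvals) Q^T
    # Denominator for the Sylvester-type Omega-update in the eigenbasis of S.
    denom = 0.5 * (eigvals[:, None] + eigvals[None, :]) + rho
    Omega = np.eye(p)
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Omega-step: solve (S Omega + Omega S)/2 + rho*Omega = I + rho*(Z - U),
        # which diagonalizes in the eigenbasis of S.
        M = np.eye(p) + rho * (Z - U)
        Omega = Q @ ((Q.T @ M @ Q) / denom) @ Q.T
        # Z-step: soft-threshold to impose sparsity on the precision estimate.
        Z_new = soft_threshold(Omega + U, lam / rho)
        # Dual update.
        U += Omega - Z_new
        if np.linalg.norm(Z_new - Z, "fro") < tol:
            Z = Z_new
            break
        Z = Z_new
    return Z
```

The Omega-step above costs O(p^3) per call via the one-off eigendecomposition; the paper's contribution is precisely to avoid this cost, exploiting the low-rank structure of the sample covariance when n is much smaller than p.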

Original language: English
Article number: 106812
Pages (from-to): 1-12
Number of pages: 12
Journal: Computational Statistics and Data Analysis
Publication status: Published - Feb 2020


Keywords

  • ADMM
  • High dimension
  • Penalized quadratic loss
  • Precision matrix

ASJC Scopus subject areas

  • Statistics and Probability
  • Computational Mathematics
  • Computational Theory and Mathematics
  • Applied Mathematics
