Abstract
This paper considers a matrix optimization problem in which the objective function is continuously differentiable and the constraints involve a semidefinite-box constraint and a rank constraint. We first replace the rank constraint by adding a non-Lipschitz penalty function to the objective and prove that this penalty problem is exact with respect to the original problem. Next, for the penalty problem we present a nonmonotone proximal gradient (NPG) algorithm whose subproblem can be solved by Newton's method with globally quadratic convergence. We also prove that the NPG algorithm converges to a first-order stationary point of the penalty problem. Furthermore, based on the NPG algorithm, we propose an adaptive penalty method (APM) for solving the original problem. Finally, the efficiency of the APM is shown via numerical experiments on the sensor network localization problem and the nearest low-rank correlation matrix problem.
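The abstract only names the NPG framework, so the sketch below is a rough, generic illustration of a nonmonotone proximal gradient loop for a composite problem min f(x) + g(x). The function names (`npg`, `prox_g`), the Grippo-type reference value taken over the last M accepted objective values, and the toy ℓ1 prox in the usage example are assumptions for illustration only; they are not the paper's actual subproblem, which involves a semidefinite-box constraint and a non-Lipschitz rank penalty and is solved by Newton's method.

```python
import numpy as np

def npg(f, grad_f, prox_g, g, x0, L0=1.0, eta=2.0, M=5, sigma=1e-4,
        max_iter=500, tol=1e-8):
    """Generic nonmonotone proximal gradient loop for min f(x) + g(x).

    prox_g(v, t) is assumed to return argmin_u { g(u) + ||u - v||**2 / (2*t) }.
    A trial point is accepted once the composite objective drops sufficiently
    below the maximum of the last M accepted values (a Grippo-type rule).
    """
    x = x0.copy()
    F_hist = [f(x) + g(x)]          # accepted objective values
    L = L0                          # current proximal parameter (step = 1/L)
    for _ in range(max_iter):
        gx = grad_f(x)
        F_ref = max(F_hist[-M:])    # nonmonotone reference value
        while True:                 # backtrack by increasing L until accepted
            x_trial = prox_g(x - gx / L, 1.0 / L)
            decrease = (sigma / 2.0) * L * np.linalg.norm(x_trial - x) ** 2
            if f(x_trial) + g(x_trial) <= F_ref - decrease:
                break
            L *= eta
        if np.linalg.norm(x_trial - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_trial
        x = x_trial
        F_hist.append(f(x) + g(x))
    return x

# Toy usage: least squares plus an l1 penalty, whose soft-thresholding prox
# stands in for the (far more involved) semidefinite-box-plus-rank-penalty
# proximal subproblem treated in the paper.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: lam * np.linalg.norm(x, 1)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
x_hat = npg(f, grad_f, prox_g, g, np.zeros(10))
```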
Original language | English |
---|---|
Pages (from-to) | 563-586 |
Number of pages | 24 |
Journal | IMA Journal of Numerical Analysis |
Volume | 40 |
Issue number | 1 |
DOIs | |
Publication status | Published - Jan 2020 |
Keywords
- non-Lipschitz penalty
- nonmonotone proximal gradient
- penalty method
- rank constrained optimization
ASJC Scopus subject areas
- General Mathematics
- Computational Mathematics
- Applied Mathematics