Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems

Yaohua Hu, Chong Li, Kaiwen Meng, Xiaoqi Yang

Research output: Journal article (peer-reviewed)

Abstract

The ℓp regularization problem with 0 < p < 1 has been widely studied for finding sparse solutions of linear inverse problems and has found successful applications in various fields of mathematics and applied science. The proximal gradient algorithm is one of the most popular algorithms for solving the ℓp regularization problem. In the present paper, we investigate the linear convergence of one inexact descent method and two inexact proximal gradient algorithms (PGAs). For this purpose, an optimality condition theorem is established to provide the equivalences among a local minimum, the second-order optimality condition and the second-order growth property of the ℓp regularization problem. By virtue of the second-order optimality condition and second-order growth property, we establish the linear convergence properties of the inexact descent method and inexact PGAs under some simple assumptions. Both linear convergence to a local minimal value and linear convergence to a local minimum are provided. Finally, the linear convergence results of these methods are extended to infinite-dimensional Hilbert spaces. Our results cannot be established under the framework of Kurdyka–Łojasiewicz theory.
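As an illustrative sketch (not the authors' implementation), the inexact PGA studied here can be mimicked for p = 1/2 as follows: each iteration takes a gradient step on the least-squares term and then applies the proximal operator of λ‖·‖_p^p coordinate-wise. Since this scalar proximal subproblem has no convenient closed form for general p, the sketch solves it approximately by a fixed-point iteration (comparing the nonzero candidate against zero), which makes each prox step inexact in the spirit of the paper. All names, step sizes, and tolerances below are assumptions for the demonstration.

```python
import numpy as np

def prox_lp_scalar(z, lam, p, iters=50):
    """Approximately minimize 0.5*(x - z)**2 + lam*|x|**p over x.

    The nonzero candidate (with the sign of z) is found by the
    fixed-point iteration x <- |z| - lam*p*x**(p-1); the result is
    compared against the candidate x = 0 and the better one returned.
    This is an inexact prox: the fixed point is only approximated.
    """
    if z == 0.0:
        return 0.0
    s, a = np.sign(z), abs(z)
    x = a
    for _ in range(iters):
        x_new = a - lam * p * x ** (p - 1)
        if x_new <= 0.0:          # stationary point pushed past zero
            x = 0.0
            break
        x = x_new
    f_zero = 0.5 * a ** 2                              # objective at x = 0
    f_x = 0.5 * (x - a) ** 2 + lam * x ** p            # objective at candidate
    return s * x if f_x < f_zero else 0.0

def prox_grad_lp(A, b, lam, p=0.5, iters=200):
    """Inexact proximal gradient iteration for
    min_x 0.5*||A x - b||^2 + lam*sum_i |x_i|^p  (0 < p < 1)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    t = 1.0 / L                        # fixed step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - t * A.T @ (A @ x - b)                      # gradient step
        x = np.array([prox_lp_scalar(zi, t * lam, p) for zi in z])
    return x

# Toy sparse-recovery example: with A = I the iteration shrinks small
# entries of b to exactly zero while keeping the large entry.
A = np.eye(5)
b = np.array([3.0, 0.0, 0.0, 0.0, 0.0])
x = prox_grad_lp(A, b, lam=0.1, p=0.5)
```

With λ = 0 the scalar prox reduces to the identity, and for large λ it thresholds the input to zero, matching the sparsity-promoting behavior of the ℓp penalty described in the abstract.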

Original language: English
Pages (from-to): 853-883
Number of pages: 31
Journal: Journal of Global Optimization
Volume: 79
Issue number: 4
DOIs
Publication status: Published - Apr 2021

Keywords

  • Descent methods
  • Inexact approach
  • Linear convergence
  • Nonconvex regularization
  • Proximal gradient algorithms
  • Sparse optimization

ASJC Scopus subject areas

  • Computer Science Applications
  • Management Science and Operations Research
  • Control and Optimization
  • Applied Mathematics
