On convergence rates of linearized proximal algorithms for convex composite optimization with applications

Yaohua Hu, Chong Li, Xiaoqi Yang

Research output: Journal article › Academic research › peer-review

30 Citations (Scopus)

Abstract

In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex composite optimization problem. Each iteration of the LPA is a proximal minimization of the convex composite function with the inner function linearized at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty, present in the Gauss-Newton method (GNM), of finding a minimum-norm solution among the set of minima of its subproblem, while still maintaining the same local convergence rate as the GNM. Under the assumptions of local weak sharp minima of order p (p ∈ [1,2]) and a quasi-regularity condition, we establish a local superlinear convergence rate for the LPA. We also propose a globalization strategy for the LPA based on a backtracking line search, as well as an inexact version of the LPA. We further apply the LPA to solve a (possibly nonconvex) feasibility problem and a sensor network localization problem. Our numerical results illustrate that the LPA meets the demand for an efficient and robust algorithm for the sensor network localization problem.
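To make the iteration described above concrete, the following is a minimal sketch of an LPA step, specialized (as an illustrative assumption, not the paper's general setting) to the outer function h(·) = ‖·‖², where the linearized proximal subproblem min_d ‖F(x) + F′(x)d‖² + (1/2v)‖d‖² has a closed-form solution via the normal equations. The function names and the test problem (intersecting the unit circle with the line x₀ = x₁) are illustrative, not from the paper; the general LPA admits any convex h.

```python
import numpy as np

def lpa_step(F, J, x, v):
    """One linearized proximal step for f(x) = h(F(x)) with h = squared norm.

    Solves min_d ||F(x) + J(x) d||^2 + (1/(2v)) ||d||^2, whose optimality
    condition is the damped normal equation (2 J^T J + I/v) d = -2 J^T F(x).
    """
    Fx, Jx = F(x), J(x)
    n = x.size
    d = np.linalg.solve(2.0 * Jx.T @ Jx + np.eye(n) / v, -2.0 * Jx.T @ Fx)
    return x + d

def lpa(F, J, x0, v=1.0, tol=1e-10, max_iter=200):
    """Iterate LPA steps with a fixed proximal parameter v until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        x_new = lpa_step(F, J, x, v)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative feasibility-type problem: find a point on the unit circle
# with x0 = x1, i.e. a zero of F below (solution (1/sqrt(2), 1/sqrt(2))).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x_star = lpa(F, J, np.array([1.0, 0.0]))
```

Because each subproblem here is strongly convex, its minimizer is unique, illustrating the singleton-subproblem advantage the abstract attributes to the LPA over the GNM.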
Original language: English
Pages (from-to): 1207-1235
Number of pages: 29
Journal: SIAM Journal on Optimization
Volume: 26
Issue number: 2
DOIs
Publication status: Published - 1 Jan 2016

Keywords

  • Convex composite optimization
  • Feasibility problem
  • Linearized proximal algorithm
  • Quasi-regularity condition
  • Sensor network localization
  • Weak sharp minima

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
