Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems

Research output: Journal article (peer-reviewed)

52 Citations (Scopus)

Abstract

In this paper, we study the proximal gradient algorithm with extrapolation for minimizing the sum of a Lipschitz differentiable function and a proper closed convex function. Under the error bound condition used in [Ann. Oper. Res., 46 (1993), pp. 157-178] for analyzing the convergence of the proximal gradient algorithm, we show that there exists a threshold such that if the extrapolation coefficients are chosen below this threshold, then the sequence generated converges R-linearly to a stationary point of the problem. Moreover, the corresponding sequence of objective values is also R-linearly convergent. In addition, the threshold reduces to 1 for convex problems, and as a consequence we obtain the R-linear convergence of the sequence generated by FISTA with fixed restart. Finally, we present some numerical experiments to illustrate our results.
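To make the scheme in the abstract concrete, the iteration being analyzed takes an extrapolation step y_k = x_k + beta_k (x_k - x_{k-1}) followed by a proximal gradient step x_{k+1} = prox_{g/L}(y_k - grad f(y_k)/L), with the coefficients beta_k kept below the paper's threshold (which equals 1 when f is convex). The following is a minimal Python sketch, not the authors' code: the function names (prox_grad_extrapolation, prox_l1), the fixed choice of beta, and the LASSO test data are all illustrative assumptions. Since f is convex in this example, any fixed beta < 1 lies below the threshold.

import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_extrapolation(grad_f, prox_g, L, x0, beta=0.9, max_iter=1000, tol=1e-9):
    # Proximal gradient method with a fixed extrapolation coefficient beta:
    #   y_k     = x_k + beta * (x_k - x_{k-1})
    #   x_{k+1} = prox_{g/L}(y_k - grad_f(y_k) / L)
    # beta should stay below the threshold from the paper (at most 1 for convex f).
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        y = x + beta * (x - x_prev)
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)
        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_next
        x_prev, x = x, x_next
    return x

# Illustrative use on a randomly generated LASSO instance
# (f(x) = 0.5 * ||Ax - b||^2 is Lipschitz differentiable, g = lam * ||x||_1):
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f (sigma_max(A)^2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: prox_l1(v, lam * t)
x_star = prox_grad_extrapolation(grad_f, prox_g, L, np.zeros(100))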
Original language: English
Pages (from-to): 124-145
Number of pages: 22
Journal: SIAM Journal on Optimization
Volume: 27
Issue number: 1
Publication status: Published - 26 Jan 2017

Keywords

  • Accelerated gradient method
  • Convex minimization
  • Error bound
  • Extrapolation
  • Linear convergence
  • Nonconvex nonsmooth minimization

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
