Convergence of Inexact Quasisubgradient Methods with Extrapolation

Xiaoqi Yang, Chenchen Zu

Research output: Journal article publication › Journal article › Academic research › peer-review

Abstract

In this paper, we investigate an inexact quasisubgradient method with extrapolation for solving a quasiconvex optimization problem with a closed, convex, and bounded constraint set. We establish convergence in objective values, iteration complexity, and rates of convergence for the proposed method under a Hölder condition and a weak sharp minima condition. When both the diminishing stepsize and the extrapolation stepsize decay as a power function, we obtain explicit iteration complexities. When the diminishing stepsize decays as a power function and the extrapolation stepsize is decreasing but not less than a power function, the diminishing stepsize provides a rate of convergence O(τ^{k^s}) (s ∈ (0, 1)) to an optimal solution or to a ball of the optimal solution set, which is faster than O(1/k^β) for each β > 0. With a geometrically decreasing extrapolation stepsize, we obtain a linear rate of convergence to a ball of the optimal solution set for both the constant stepsize and the dynamic stepsize. Our numerical testing shows that the method with extrapolation is substantially more efficient than the one without extrapolation in terms of the number of iterations needed to reach an approximate optimal solution.
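To make the update structure concrete, the following is a minimal sketch of a projected quasisubgradient step with extrapolation, under illustrative assumptions: a simple differentiable quasiconvex objective f(x) = ||x - a||^(1/2), a Euclidean ball as the constraint set, the normalized gradient as the unit quasisubgradient, and power-decaying stepsizes. All names and parameters are hypothetical, and the placement of the subgradient evaluation at the extrapolated point is an assumption; this is not the paper's exact scheme.

```python
# A minimal sketch of a quasisubgradient method with extrapolation.
# Illustrative assumptions: quasiconvex f(x) = ||x - a||^(1/2), a ball
# constraint set C, the normalized gradient as the unit quasisubgradient,
# and power-decaying stepsizes v_k = (k+1)^(-s), alpha_k = (k+1)^(-t).
import numpy as np

def project_ball(y, center, radius):
    # Euclidean projection onto C = {x : ||x - center|| <= radius}.
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

def unit_quasisubgradient(x, a):
    # For the differentiable quasiconvex f(x) = ||x - a||^(1/2), the
    # normalized gradient direction serves as a unit quasisubgradient.
    d = x - a
    n = np.linalg.norm(d)
    return d / n if n > 0 else np.zeros_like(x)

def quasisubgradient_extrapolation(x0, a, center, radius,
                                   iters=2000, s=0.6, t=0.8):
    # y_k     = x_k + alpha_k (x_k - x_{k-1})   (extrapolation step)
    # x_{k+1} = P_C(y_k - v_k g_k), g_k a unit quasisubgradient at y_k
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(iters):
        v_k = (k + 1) ** (-s)       # diminishing stepsize
        alpha_k = (k + 1) ** (-t)   # decaying extrapolation stepsize
        y = x + alpha_k * (x - x_prev)
        g = unit_quasisubgradient(y, a)
        x_prev, x = x, project_ball(y - v_k * g, center, radius)
    return x

# Usage: the ball contains the unconstrained minimizer a, so the
# iterates should approach a = (1, -2).
a = np.array([1.0, -2.0])
print(quasisubgradient_extrapolation(np.zeros(2), a,
                                     center=np.zeros(2), radius=5.0))
```

The sketch only mirrors the update structure; the Hölder and weak sharp minima conditions in the paper are what connect such stepsize choices to the stated complexity and rate results.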

Original language: English
Pages (from-to): 676-703
Number of pages: 28
Journal: Journal of Optimization Theory and Applications
Volume: 193
Issue number: 1-3
DOIs
Publication status: Published - Mar 2022

Keywords

  • Extrapolation
  • Iteration complexity
  • Quasiconvex
  • Quasisubgradient
  • Rate of convergence

ASJC Scopus subject areas

  • Management Science and Operations Research
  • Control and Optimization
  • Applied Mathematics
