Conditional subgradient methods for constrained quasi-convex optimization problems

Yaohua Hu, Carisa Kwok Wai Yu, Chong Li, Xiaoqi Yang

Research output: Journal article publication › Journal article › Academic research › peer-review

10 Citations (Scopus)

Abstract

Subgradient methods for solving quasi-convex optimization problems have been well studied. However, the usual subgradient method often suffers from a zig-zagging phenomenon and exhibits slow convergence in many applications. To avert the zig-zagging phenomenon and accelerate convergence, we introduce a conditional subgradient method for solving a nondifferentiable constrained quasi-convex optimization problem in this paper. At each iteration, a conditional quasi-subgradient, constructed from a unit quasi-subgradient and a normal vector to the constraint set, is employed in place of the quasi-subgradient used in the usual subgradient method. Assuming the Hölder condition of order p, we investigate convergence properties, in terms of both function values and iterates, of the conditional subgradient method under the constant, diminishing, and dynamic stepsize rules, respectively. We also describe the finite convergence behavior of the conditional subgradient method when the interior of the optimal solution set is nonempty. Extending to the inexact setting, we further propose a conditional ε-subgradient method and establish its convergence results under the assumption that the computational error is deterministic and bounded.
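The iteration described above can be sketched as follows. This is a minimal illustrative example, not the paper's exact setting: the quasi-convex objective, the box constraint, the diminishing stepsizes α_k = 1/(k+1), and the particular selection of the normal-cone element are all assumptions made here for concreteness.

```python
import numpy as np

def f(x):
    # f(x) = sqrt(|x1| + |x2|): quasi-convex (monotone transform of the 1-norm).
    return np.sqrt(np.sum(np.abs(x)))

def unit_quasi_subgradient(x):
    # For x > 0, the normalized gradient direction sign(x)/||sign(x)||
    # serves as a unit quasi-subgradient of f at x.
    g = np.sign(x)
    return g / np.linalg.norm(g)

def normal_vector(x, g, lo, hi, tol=1e-12):
    # One possible selection of a normal-cone element of the box at x:
    # cancel the components of g that push against an active face, so
    # that the combined direction g + nu no longer zig-zags along the
    # boundary. Other selections are possible; this choice is assumed
    # here for illustration.
    nu = np.zeros_like(x)
    at_lo = x <= lo + tol
    at_hi = x >= hi - tol
    nu[at_lo & (g > 0)] = -g[at_lo & (g > 0)]
    nu[at_hi & (g < 0)] = -g[at_hi & (g < 0)]
    return nu

def conditional_subgradient(x0, lo, hi, iters=50):
    # Conditional subgradient method with diminishing stepsizes
    # alpha_k = 1/(k+1) (one of the stepsize rules analyzed).
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = unit_quasi_subgradient(x)
        d = g + normal_vector(x, g, lo, hi)   # conditional quasi-subgradient
        x = np.clip(x - d / (k + 1), lo, hi)  # project back onto the box
    return x

# Minimize f over the box [1, 3]^2; the constrained minimizer is (1, 1).
x_star = conditional_subgradient([2.5, 2.0], lo=1.0, hi=3.0)
```

Once an iterate lands on an active face, the normal vector removes the component of the quasi-subgradient pointing into that face, so the method slides along the boundary toward the corner instead of bouncing off it.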
Original language: English
Pages (from-to): 2143-2158
Number of pages: 16
Journal: Journal of Nonlinear and Convex Analysis
Volume: 17
Issue number: 10
Publication status: Published - 1 Jan 2016

Keywords

  • Conditional subgradient method
  • Convergence analysis
  • ε-subgradient method
  • Quasi-convex optimization

ASJC Scopus subject areas

  • Analysis
  • Geometry and Topology
  • Control and Optimization
  • Applied Mathematics
