Abstract convergence theorem for quasi-convex optimization problems with applications

Carisa Kwok Wai Yu, Yaohua Hu, Xiaoqi Yang, Siu Kai Choy

Research output: Journal article (Academic research, peer-reviewed)

4 Citations (Scopus)


Quasi-convex optimization is fundamental to the modelling of many practical problems in fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences that satisfy a general basic inequality, under suitable assumptions on the parameters. The convergence properties, both in function values and in distances of iterates from the optimal solution set, are discussed. The abstract convergence theorem covers relevant results for many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain convergence results for the proposed subgradient method under a Hölder condition of order p, using constant, diminishing or dynamic stepsize rules. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb–Douglas production efficiency problem.
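To make the class of methods discussed above concrete, the following is a minimal one-dimensional sketch of a subgradient iteration with a diminishing stepsize rule (c/√k), applied to a quasi-convex but non-convex function. It is an illustrative sketch only, not the paper's perturbed method; the function names and the test function f(x) = √|x − 2| are assumptions for demonstration.

```python
import math

def quasi_subgradient_method(f, qsubgrad, x0, n_iters=500, c=1.0):
    """Illustrative 1-D subgradient scheme with a diminishing stepsize c/sqrt(k).

    `qsubgrad` returns a normalized (unit) quasi-subgradient direction at x.
    Tracks the best point seen, since subgradient methods are not descent methods.
    """
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(1, n_iters + 1):
        g = qsubgrad(x)                    # unit quasi-subgradient direction
        x = x - (c / math.sqrt(k)) * g     # diminishing stepsize rule
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Quasi-convex (but non-convex) test function: f(x) = sqrt(|x - 2|),
# whose sublevel sets are intervals, hence convex.
f = lambda x: math.sqrt(abs(x - 2.0))
qsg = lambda x: 1.0 if x >= 2.0 else -1.0   # normalized direction
x_star, f_star = quasi_subgradient_method(f, qsg, 10.0)
```

Because the stepsizes are square-summable-free but diminishing, the iterates overshoot the minimizer and then oscillate around it with shrinking amplitude, which is why the best iterate is recorded rather than the last one.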

Original language: English
Pages (from-to): 1289-1304
Number of pages: 16
Issue number: 7
Publication status: Published - 26 Mar 2018


Keywords

  • abstract convergence theorem
  • basic inequality
  • Cobb–Douglas production efficiency problem
  • quasi-convex programming
  • subgradient method

ASJC Scopus subject areas

  • Control and Optimization
  • Management Science and Operations Research
  • Applied Mathematics
