Convergence of the BFGS method for LC1 convex constrained optimization

Research output: Journal article › Academic research › peer-review

28 Citations (Scopus)

Abstract

This paper proposes a BFGS-SQP method for linearly constrained optimization in which the objective function f is required only to have a Lipschitz gradient. The Karush-Kuhn-Tucker system of the problem is equivalent to a system of nonsmooth equations F(v) = 0. At each step, the quasi-Newton matrix is updated only if ∥F(v_k)∥ satisfies a prescribed rule. The method converges globally, and the rate of convergence is superlinear when f is twice strongly differentiable at a solution of the optimization problem. No assumptions on the constraints are required. This generalizes the classical convergence theory of the BFGS method, which requires the objective function to be twice continuously differentiable. Applications to stochastic programs with recourse on a CM5 parallel computer are discussed.
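The abstract's idea of updating the quasi-Newton matrix only when a test is satisfied can be illustrated with a generic gated BFGS iteration. This is a minimal sketch, not the paper's BFGS-SQP method: the paper gates the update on ∥F(v_k)∥ for a nonsmooth KKT system, whereas the hypothetical `bfgs_gated` below handles a smooth unconstrained problem and gates the update on the standard curvature condition sᵀy > 0, which keeps the matrix B positive definite.

```python
import numpy as np

def bfgs_gated(f, grad, x0, tol=1e-8, max_iter=200):
    """Generic BFGS iteration with a gated Hessian-approximation update.

    Hypothetical illustration: the update of B is skipped whenever the
    curvature condition s^T y > 0 fails, analogous in spirit to updating
    only when a residual rule is satisfied.
    """
    B = np.eye(x0.size)            # quasi-Newton approximation of the Hessian
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(B, -g)  # quasi-Newton search direction
        # Backtracking Armijo line search (B positive definite => p is descent)
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        y = grad(x_new) - g
        if s @ y > 1e-12:           # gate: update B only when the test holds
            Bs = B @ s
            B += np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
        x = x_new
    return x
```

For a strongly convex quadratic f(x) = ½xᵀAx - bᵀx, the iteration drives the gradient Ax - b to zero, so the returned point approximates A⁻¹b; skipped updates simply leave B unchanged for that step.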
Original language: English
Pages (from-to): 2051-2063
Number of pages: 13
Journal: SIAM Journal on Control and Optimization
Volume: 34
Issue number: 6
DOIs
Publication status: Published - 1 Jan 1996
Externally published: Yes

Keywords

  • Convex programming
  • Nonsmooth equations
  • Quasi-Newton methods

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics
