Nonlinear Lagrangian theory for nonconvex optimization

C. J. Goh, Xiaoqi Yang

Research output: Journal article › Academic research › peer-review

29 Citations (Scopus)

Abstract

The Lagrangian function in the conventional theory for solving constrained optimization problems is a linear combination of the cost and constraint functions. Typically, the optimality conditions based on linear Lagrangian theory are either necessary or sufficient, but not both, unless the underlying cost and constraint functions are also convex. We propose a somewhat different approach for solving a nonconvex inequality-constrained optimization problem based on a nonlinear Lagrangian function. This leads to optimality conditions which are both necessary and sufficient, without any convexity assumption. Subsequently, under appropriate assumptions, the optimality conditions derived from the new nonlinear Lagrangian approach are used to obtain an equivalent root-finding problem. By appropriately defining a dual optimization problem and an alternative dual problem, we show that a zero duality gap always holds, regardless of convexity, in contrast to the case of linear Lagrangian duality.
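
The following minimal sketch (not taken from the paper) illustrates the phenomenon the abstract contrasts against: on a nonconvex problem, the classical linear Lagrangian dual can leave a positive duality gap. The example problem, the ground set [0, 2], and the grid search are assumptions chosen purely for illustration; the paper's nonlinear Lagrangian construction, which is designed to close such gaps, is not reproduced here.

```python
# Sketch: duality gap of the classical *linear* Lagrangian on a nonconvex problem
#     minimize   f(x) = -x**2
#     subject to g(x) = x - 1 <= 0,   x in X = [0, 2].
# The primal optimum is x* = 1 with f(x*) = -1, while the linear Lagrangian dual
# peaks at -2, leaving a gap of 1. Everything here is a standard textbook
# construction, shown only as a point of comparison with the paper's approach.

def f(x):
    return -x * x            # nonconvex cost

def g(x):
    return x - 1.0           # inequality constraint, feasible when g(x) <= 0

# grid over the compact ground set X = [0, 2]
X = [2.0 * i / 1000.0 for i in range(1001)]

# primal optimal value: minimize f over feasible grid points
primal = min(f(x) for x in X if g(x) <= 0.0)

def linear_dual(lam):
    # q(lambda) = min_{x in X} f(x) + lambda * g(x)
    return min(f(x) + lam * g(x) for x in X)

# dual optimal value: maximize q(lambda) over lambda >= 0 (grid on [0, 10])
dual = max(linear_dual(i / 100.0) for i in range(1001))

print(f"primal optimal value : {primal:.3f}")          # ~ -1.0
print(f"linear dual value    : {dual:.3f}")            # ~ -2.0
print(f"duality gap          : {primal - dual:.3f}")   # ~  1.0 > 0
```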
Original language: English
Pages (from-to): 99-121
Number of pages: 23
Journal: Journal of Optimization Theory and Applications
Volume: 109
Issue number: 1
DOIs
Publication status: Published - 1 Apr 2001

Keywords

  • Inequality constraints
  • Nonconvex optimization
  • Nonlinear Lagrangian
  • Sufficient and necessary conditions
  • Zero duality gap

ASJC Scopus subject areas

  • Control and Optimization
  • Management Science and Operations Research
  • Applied Mathematics
