Nonlinear Lagrangian theory for nonconvex optimization

C. J. Goh, Xiaoqi Yang

Research output: Journal article › Academic research › peer-review

31 Citations (Scopus)


The Lagrangian function in the conventional theory for solving constrained optimization problems is a linear combination of the cost and constraint functions. Typically, the optimality conditions based on linear Lagrangian theory are either necessary or sufficient, but not both, unless the underlying cost and constraint functions are also convex. We propose a somewhat different approach for solving a nonconvex inequality-constrained optimization problem based on a nonlinear Lagrangian function. This leads to optimality conditions that are both necessary and sufficient, without any convexity assumption. Subsequently, under appropriate assumptions, the optimality conditions derived from the new nonlinear Lagrangian approach are used to obtain an equivalent root-finding problem. By appropriately defining a dual optimization problem and an alternative dual problem, we show that a zero duality gap always holds, regardless of convexity, contrary to the case of linear Lagrangian duality.
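The abstract's central claim, that a suitably chosen nonlinear Lagrangian closes the duality gap which the linear Lagrangian leaves open on nonconvex problems, can be illustrated numerically. The sketch below uses a max-type Lagrangian, L(x, d) = max{f(x), d·g(x)}, a construction common in the nonlinear Lagrangian literature but not necessarily the exact form used in this paper; the test problem, grid resolution, and multiplier ranges are all choices made purely for this illustration.

```python
# Hypothetical illustration (not the paper's exact construction):
# compare the classical linear Lagrangian dual with a max-type
# nonlinear Lagrangian dual on a small nonconvex problem:
#     minimize  f(x) = 5 - x^2   subject to  g(x) = x - 1 <= 0,  x in [0, 2].
# The optimum is f* = 4 at x = 1; f is concave, so the problem is nonconvex.

def f(x): return 5.0 - x * x
def g(x): return x - 1.0

N = 4000
xs = [2.0 * i / N for i in range(N + 1)]   # grid over the box [0, 2]

# Linear Lagrangian dual function: q(lam) = min_x [ f(x) + lam * g(x) ]
def q_linear(lam):
    return min(f(x) + lam * g(x) for x in xs)

# Best linear dual value over lam in [0, 10]: stays strictly below f* = 4.
lin_dual = max(q_linear(0.1 * k) for k in range(101))

# Max-type nonlinear Lagrangian dual: q(d) = min_x max{ f(x), d * g(x) }
def q_nonlinear(d):
    return min(max(f(x), d * g(x)) for x in xs)

# As d grows, the nonlinear dual value approaches f* = 4: no duality gap.
nl_dual = max(q_nonlinear(10.0 ** k) for k in range(5))

print(f"f* = 4, linear dual = {lin_dual:.3f}, nonlinear dual = {nl_dual:.3f}")
```

On this example the linear dual tops out at 3, a duality gap of 1, while the max-type nonlinear dual recovers the optimal value 4 as the multiplier grows, which is the qualitative behavior the abstract describes.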
Original language: English
Pages (from-to): 99-121
Number of pages: 23
Journal: Journal of Optimization Theory and Applications
Issue number: 1
Publication status: Published - 1 Apr 2001


Keywords

  • Inequality constraints
  • Nonconvex optimization
  • Nonlinear Lagrangian
  • Sufficient and necessary conditions
  • Zero duality gap

ASJC Scopus subject areas

  • Control and Optimization
  • Management Science and Operations Research
  • Applied Mathematics
