Near-optimal control of nonlinear dynamical systems: A brief survey

Yinyan Zhang, Shuai Li, Liefa Liao

Research output: Journal article publication · Review article · Academic research · peer-review

47 Citations (Scopus)


For nonlinear dynamical systems, an optimal control problem generally requires solving a partial differential equation called the Hamilton–Jacobi–Bellman equation, whose analytical solution generally cannot be obtained. Meanwhile, the demand for optimal control keeps increasing, driven by goals such as saving energy, reducing transient time, and minimizing error accumulation. Consequently, methods have been reported that approximately solve the problem, leading to so-called near-optimal control, although their technical details differ. This research direction has seen great progress in recent years, but a timely review of it is still missing. This paper serves as a brief survey of existing methods in this research direction.
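For context, the Hamilton–Jacobi–Bellman equation referenced in the abstract can be stated in a standard textbook form. The following is an illustrative sketch for an infinite-horizon continuous-time problem; the symbols (system dynamics $f$, running cost $r$, value function $V$) are generic notation, not taken from the paper itself:

```latex
% System dynamics and cost (illustrative notation):
%   \dot{x} = f(x, u), \qquad
%   V(x_0) = \min_{u(\cdot)} \int_0^{\infty} r(x(\tau), u(\tau)) \, d\tau
%
% The associated Hamilton--Jacobi--Bellman (HJB) equation:
0 = \min_{u} \Big[ \, r(x, u) + \nabla V(x)^{\top} f(x, u) \, \Big]
```

For general nonlinear $f$ this first-order PDE in $V$ rarely admits a closed-form solution, which is what motivates the approximate (near-optimal) methods the survey covers.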

Original language: English
Pages (from-to): 71-80
Number of pages: 10
Journal: Annual Reviews in Control
Publication status: Published - 1 Jan 2019


Keywords

  • Dynamic programming
  • Near-optimal control
  • Nonlinear dynamical system
  • Nonlinear programming

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering

