Abstract
For nonlinear dynamical systems, an optimal control problem generally requires solving a partial differential equation called the Hamilton–Jacobi–Bellman equation, whose analytical solution generally cannot be obtained. However, the demand for optimal control keeps increasing, with goals such as saving energy, reducing transient time, and minimizing error accumulation. Consequently, methods have been reported that solve the problem approximately, leading to so-called near-optimal control, although their technical details differ. This research direction has seen great progress in recent years, but a timely review of it is still missing. This paper serves as a brief survey of existing methods in this research direction.
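For context, a common infinite-horizon statement of the Hamilton–Jacobi–Bellman equation mentioned in the abstract is sketched below; the notation ($V$ for the value function, $f$ for the dynamics, $r$ for the running cost) is a standard convention assumed here, not taken from this record.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A common undiscounted infinite-horizon form of the HJB equation
% (notation assumed here, not taken from the record):
% dynamics $\dot{x} = f(x,u)$, running cost $r(x,u)$, value function $V(x)$.
\[
  0 = \min_{u}\bigl[\, r(x,u) + \nabla V(x)^{\top} f(x,u) \,\bigr]
\]
% For nonlinear $f$, this PDE in $V$ generally admits no closed-form
% solution, which is what motivates the approximate (near-optimal)
% methods surveyed in the paper.
\end{document}
```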
| Original language | English |
| --- | --- |
| Pages (from-to) | 71-80 |
| Number of pages | 10 |
| Journal | Annual Reviews in Control |
| Volume | 47 |
| DOIs | |
| Publication status | Published - 1 Jan 2019 |
Keywords
- Dynamic programming
- Near-optimal control
- Nonlinear dynamical system
- Nonlinear programming
ASJC Scopus subject areas
- Software
- Control and Systems Engineering