Abstract
This paper introduces a new approach to analyzing the stability of neural network models without using a Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of a gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, from any initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on some problems are also reported.
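For context, the gradient-based neural network for an unconstrained problem min f(x) is commonly modeled as the gradient flow dx/dt = -∇f(x), whose equilibrium points are the stationary points of f. The following minimal Python sketch integrates that ODE with forward Euler on an assumed convex quadratic test function; the test function, step size, and function names are illustrative assumptions, not the paper's own model details or experiments.

```python
import numpy as np

# Illustrative sketch (not the paper's experiments): a gradient-based neural
# network for minimizing f is commonly modeled by the gradient flow
#     dx/dt = -grad_f(x),
# whose trajectories settle at stationary points of f. We integrate this ODE
# with forward Euler on an assumed convex quadratic test function.

def grad_f(x):
    # Gradient of f(x) = 0.5 * ||A x - b||^2, a convex quadratic test problem
    # (A, b are arbitrary illustrative data).
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -1.0])
    return A.T @ (A @ x - b)

def simulate(x0, dt=1e-2, steps=5000):
    # Forward-Euler integration of dx/dt = -grad_f(x) starting from x0.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * grad_f(x)
    return x

if __name__ == "__main__":
    x_star = simulate([5.0, -3.0])
    print("approximate equilibrium point:", x_star)
    print("gradient norm at that point:", np.linalg.norm(grad_f(x_star)))
```

Because the assumed test objective is convex with a Lipschitz-continuous gradient, the simulated trajectory converges to the unique minimizer, illustrating the kind of convergence behavior the abstract describes for the convex case.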
Original language | English |
---|---|
Pages (from-to) | 363-381 |
Number of pages | 19 |
Journal | Journal of Global Optimization |
Volume | 19 |
Issue number | 4 |
DOIs | |
Publication status | Published - 1 Apr 2001 |
Keywords
- Asymptotic stability
- Equilibrium point
- Equilibrium set
- Exponential stability
- Gradient-based neural network
ASJC Scopus subject areas
- Computer Science Applications
- Control and Optimization
- Management Science and Operations Research
- Applied Mathematics