Abstract
This paper presents a class of recurrent neural networks for solving quadratic programming problems. Unlike most existing recurrent neural networks for quadratic programming, the proposed model converges in finite time, and its activation function is not required to be hard-limiting to achieve finite-time convergence. The stability, finite-time convergence, and optimality of the proposed neural network with respect to the original quadratic programming problem are proven theoretically. Extensive simulations evaluate the performance of the network under different parameters. In addition, the proposed neural network is applied to the k-winners-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of the method for solving the k-WTA problem.
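The record contains only the abstract, so the following is a minimal illustrative sketch rather than the paper's finite-time network: it shows one standard way to cast the k-WTA task as a small convex quadratic program and to simulate a generic projection-type recurrent dynamics for it with forward-Euler integration. The regularization weight `eps`, step size `h`, iteration count, and input values are assumed, illustrative choices.

```python
# Illustrative sketch only (not the model proposed in the paper):
# k-WTA posed as the convex QP
#   minimize (eps/2)*||x||^2 - v.x   s.t.  sum(x) = k,  0 <= x <= 1,
# solved by a generic primal-dual projection-type recurrent dynamics
#   dx/dt   = -x + clip(x - (eps*x - v + lam), 0, 1)
#   dlam/dt = sum(x) - k
# simulated with explicit forward-Euler steps.
import numpy as np

def kwta_projection_network(v, k, eps=0.05, h=0.01, iters=40000):
    n = len(v)
    x = np.full(n, k / n)   # start from a state whose entries sum to k
    lam = 0.0               # multiplier for the equality constraint sum(x) = k
    for _ in range(iters):
        x_new = x + h * (-x + np.clip(x - (eps * x - v + lam), 0.0, 1.0))
        lam += h * (x.sum() - k)
        x = x_new
    return x

v = np.array([0.2, 0.9, 0.5, 0.7, 0.1, 0.6, 0.3, 0.8])
x = kwta_projection_network(v, k=3)
print("inputs         :", v)
print("network output :", np.round(x, 3))
print("winners (top-3):", np.sort(np.argsort(x)[-3:]))  # expected: indices 1, 3, 7
```

With a small `eps`, the equilibrium output is (near-)binary, with ones at the k largest inputs; this only illustrates the QP view of k-WTA, while the paper's contribution is a network of this general kind with provable finite-time convergence.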
Original language | English |
---|---|
Pages (from-to) | 27-39 |
Number of pages | 13 |
Journal | Neural Networks |
Volume | 39 |
DOIs | |
Publication status | Published - 1 Mar 2013 |
Externally published | Yes |
Keywords
- Finite-time convergence
- k-winners-take-all (k-WTA)
- Quadratic programming
- Recurrent neural network
- Stability
ASJC Scopus subject areas
- Artificial Intelligence
- Cognitive Neuroscience