A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application

Shuai Li, Yangming Li, Zheng Wang

Research output: Journal article (peer-reviewed)

113 Citations (Scopus)


This paper presents a class of recurrent neural networks for solving quadratic programming problems. Unlike most existing recurrent neural networks for quadratic programming, the proposed model converges in finite time, and finite-time convergence does not require a hard-limiting activation function. The stability and finite-time convergence of the proposed neural network, and the optimality of its equilibrium for the original quadratic programming problem, are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network under different parameters. In addition, the proposed neural network is applied to solve the k-winners-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for the k-WTA problem.
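The abstract notes that the k-WTA problem (selecting the k largest of n inputs) can be posed as a quadratic program and solved by a dual neural network. The sketch below illustrates that idea in a deliberately simplified form: a single scalar dual variable is adapted by Euler integration until exactly k outputs saturate. This is only an asymptotic stand-in, not the paper's finite-time dual network; the function name, the gain `lr`, and the step count are illustrative assumptions.

```python
# Hedged sketch: k-WTA as a QP with constraints sum(x) = k, 0 <= x <= 1.
# A single scalar dual variable y enforces the sum constraint; the primal
# solution is the box projection x_i = clip(u_i - y). This is a simplified
# asymptotic scheme, not the paper's finite-time network.
def kwta(u, k, lr=0.05, steps=20000):
    """Return a 0/1 list marking the k largest entries of u."""
    clip = lambda v: max(0.0, min(1.0, v))  # projection onto [0, 1]
    y = 0.0
    for _ in range(steps):
        x = [clip(ui - y) for ui in u]
        y += lr * (sum(x) - k)  # Euler step: y rises while too many winners
    return [round(clip(ui - y)) for ui in u]

print(kwta([3.0, 9.0, 1.0, 7.0, 5.0], 2))  # -> [0, 1, 0, 1, 0]
```

At equilibrium the dual variable y settles between the k-th and (k+1)-th largest inputs, so the projected outputs are 1 for the winners and 0 otherwise; the finite-time network in the paper reaches such an equilibrium in bounded time rather than asymptotically.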
Original language: English
Pages (from-to): 27-39
Number of pages: 13
Journal: Neural Networks
Publication status: Published - 1 Mar 2013
Externally published: Yes


Keywords

  • Finite-time convergence
  • K winners take all (k-WTA)
  • Quadratic programming
  • Recurrent neural network
  • Stability

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cognitive Neuroscience
