Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks

Shuai Li, Bo Liu, Yangming Li

Research output: Journal article publication › Journal article › Academic research › peer-review

81 Citations (Scopus)


The winner-take-all (WTA) competition is widely observed in inanimate media, biological systems, and society. Many mathematical models have been proposed to describe the phenomena discovered in these different fields, and such models are capable of demonstrating the WTA competition. However, they are often very complicated owing to compromises with experimental realities in their particular fields, and it is often difficult to explain the underlying mechanism of the competition from the perspective of feedback on the basis of those sophisticated models. In this paper, we make steps in that direction and present a simple model, which produces the WTA competition by taking advantage of selective positive-negative feedback through the interaction of neurons via a p-norm. Compared with existing models, this model offers an explicit explanation of the competition mechanism. The ultimate convergence behavior of the model is proven analytically, its convergence rate is discussed, and simulations are conducted in both static and dynamic competition scenarios. Both theoretical and numerical results validate the effectiveness of the dynamic equation in describing the nonlinear phenomena of WTA competition.
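The selective positive-negative feedback idea in the abstract can be illustrated with a small numerical sketch. This is a hypothetical toy system in the spirit of the description, not the paper's actual dynamic equation: the inputs `u`, the gain structure, and the integration settings are all assumptions made for illustration.

```python
import numpy as np

# Toy WTA dynamics (an assumption, not the paper's equation): each neuron i
# receives positive self-feedback x_i * u_i and a shared negative feedback
# term -x_i * ||x||_p^p. Only the neuron with the largest input u_i keeps
# receiving net positive feedback once the norm term has grown; all others
# see net negative feedback and decay toward zero.
p = 2                                  # order of the p-norm coupling
u = np.array([0.8, 1.5, 1.1, 0.6])     # made-up external inputs (gains)
x = np.full(u.shape, 0.1)              # identical positive initial states

dt, steps = 1e-3, 20_000               # simple forward-Euler integration
for _ in range(steps):
    x = x + dt * x * (u - np.sum(np.abs(x) ** p))

winner = int(np.argmax(x))
print(winner, np.round(x, 4))          # the neuron with the largest u wins
```

At equilibrium the shared negative term balances the winner's input, so the p-th power of the state norm settles near `u.max()` while the losing neurons vanish, which is one way to see the "selective" nature of the feedback.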
Original language: English
Article number: 6392288
Pages (from-to): 301-309
Number of pages: 9
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 2
Publication status: Published - 1 Jan 2013
Externally published: Yes


Keywords

  • Competition
  • Nonlinear
  • Recurrent neural networks
  • Selective positive-negative feedback
  • Winner-take-all (WTA)

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


