A conjugate gradient learning algorithm for recurrent neural networks

Wing Fai Chang, Man Wai Mak

Research output: Journal article publication › Journal article › Academic research › peer-review

18 Citations (Scopus)


The real-time recurrent learning (RTRL) algorithm, originally proposed for training recurrent neural networks, requires a large number of iterations to converge because a small learning rate must be used. While an obvious remedy is to use a large learning rate, this can result in undesirable convergence characteristics. This paper attempts to improve the convergence capability and convergence characteristics of the RTRL algorithm by incorporating conjugate gradient computation into its learning procedure. The resulting algorithm, referred to as the conjugate gradient recurrent learning (CGRL) algorithm, is applied to train fully connected recurrent neural networks to simulate a second-order low-pass filter and to predict the chaotic intensity pulsations of an NH3 laser. Results show that the CGRL algorithm exhibits a substantial improvement in convergence (in terms of the reduction in mean squared error per epoch) compared to the RTRL and batch-mode RTRL algorithms.
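The abstract's core idea is to replace the plain gradient step of RTRL with a conjugate-gradient direction update computed once per epoch. The sketch below is not the paper's implementation; it illustrates the conjugate-gradient principle on a small quadratic objective, whose gradient stands in for the RTRL error gradient accumulated over an epoch. All function names here are illustrative, and the Fletcher-Reeves coefficient is one common choice (the paper may use a different variant).

```python
# Illustrative sketch of a conjugate-gradient descent loop, the kind of update
# CGRL substitutes for RTRL's steepest-descent step. The objective is the
# quadratic f(w) = 0.5 * w.A.w - b.w, so its gradient is g = A w - b.

def matvec(A, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient_descent(A, b, w, epochs=10):
    g = [gi - bi for gi, bi in zip(matvec(A, w), b)]   # gradient A w - b
    d = [-gi for gi in g]                              # first direction: steepest descent
    for _ in range(epochs):
        Ad = matvec(A, d)
        denom = dot(d, Ad)
        if denom <= 1e-12:                             # direction has collapsed; stop
            break
        alpha = -dot(g, d) / denom                     # exact line search on the quadratic
        w = [wi + alpha * di for wi, di in zip(w, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, w), b)]
        # Fletcher-Reeves coefficient; makes the new direction conjugate to the old one.
        beta = dot(g_new, g_new) / max(dot(g, g), 1e-12)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
        if dot(g, g) < 1e-20:                          # converged
            break
    return w

# Example: minimize with A = [[3, 1], [1, 2]], b = [1, 1]; the minimizer is
# the solution of A w = b, namely w = [0.2, 0.4], reached in two iterations.
w = conjugate_gradient_descent([[3.0, 1.0], [1.0, 2.0]], [1.0, 1.0], [0.0, 0.0])
```

On a quadratic with exact line search, conjugate gradient terminates in at most n iterations for n parameters, which is the convergence advantage the abstract reports over the fixed-learning-rate RTRL step.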
Original language: English
Pages (from-to): 173-189
Number of pages: 17
Issue number: 1-3
Publication status: Published - 1 Feb 1999


Keywords

  • Conjugate gradient
  • Real-time recurrent learning
  • Recurrent neural networks

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cellular and Molecular Neuroscience


