Abstract
The real-time recurrent learning (RTRL) algorithm, originally proposed for training recurrent neural networks, requires many iterations to converge because a small learning rate must be used. An obvious remedy is to use a large learning rate, but this can lead to undesirable convergence behaviour. This paper attempts to improve the convergence speed and convergence characteristics of the RTRL algorithm by incorporating conjugate gradient computation into its learning procedure. The resulting algorithm, referred to as the conjugate gradient recurrent learning (CGRL) algorithm, is applied to train fully connected recurrent neural networks to simulate a second-order low-pass filter and to predict the chaotic intensity pulsations of an NH3 laser. Results show that the CGRL algorithm yields a substantial improvement in convergence (in terms of the reduction in mean squared error per epoch) compared with the RTRL and batch-mode RTRL algorithms.
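The core idea the abstract describes is replacing the plain gradient step of RTRL with a conjugate-gradient search direction. As a minimal illustration of that mechanism (not the paper's CGRL algorithm itself), the sketch below runs Polak-Ribière nonlinear conjugate gradient with an exact line search on a toy quadratic loss standing in for the network's mean squared error; the matrix `A`, vector `b`, and function names are assumptions for the example only.

```python
import numpy as np

# Toy quadratic loss 0.5 w^T A w - b^T w standing in for the network's MSE.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(w):
    """Gradient of the toy loss at weight vector w."""
    return A @ w - b

def cg_minimize(w0, n_steps=10):
    """Polak-Ribiere conjugate gradient with exact line search (quadratic case).
    Illustrative sketch only -- the paper's CGRL couples this idea with RTRL's
    recurrent sensitivity computation."""
    w = w0.copy()
    g = grad(w)
    d = -g                                   # first direction: steepest descent
    for _ in range(n_steps):
        if g @ g < 1e-20:                    # gradient vanished: converged
            break
        alpha = -(g @ d) / (d @ A @ d)       # exact line search along d
        w = w + alpha * d
        g_new = grad(w)
        # Polak-Ribiere coefficient, clipped at 0 to keep a descent direction
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d                # conjugate update of the direction
        g = g_new
    return w

w_star = cg_minimize(np.zeros(2))            # minimiser of the toy loss, A^{-1} b
```

On a quadratic in n dimensions, conjugate gradient with exact line search reaches the minimiser in at most n steps, which is the convergence advantage over a small fixed-learning-rate gradient step that the abstract's per-epoch MSE comparison reflects.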
Original language | English |
---|---|
Pages (from-to) | 173-189 |
Number of pages | 17 |
Journal | Neurocomputing |
Volume | 24 |
Issue number | 1-3 |
DOIs | |
Publication status | Published - 1 Feb 1999 |
Keywords
- Conjugate gradient
- Real time recurrent learning
- Recurrent neural networks
ASJC Scopus subject areas
- Artificial Intelligence
- Cellular and Molecular Neuroscience