Exploring the effects of Lamarckian and Baldwinian learning in evolving recurrent neural networks

Kim W C Ku, Man Wai Mak

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

43 Citations (Scopus)


A drawback of using genetic algorithms (GAs) to train recurrent neural networks is that many generations are needed to evolve the networks to an optimal solution. To reduce the number of generations required, a Lamarckian learning mechanism and a Baldwinian learning mechanism are embedded into a cellular GA. This paper investigates the effects of these two learning mechanisms on the convergence performance of the cellular GA and discusses the criteria that make learning useful to GAs. The results show that the Lamarckian mechanism assists the cellular GA, whereas the Baldwinian mechanism fails to do so. Beyond reducing the number of generations, we find that the overall computation time can also be reduced when learning is embedded into the cellular GA in an appropriate manner.
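The distinction between the two mechanisms can be illustrated with a minimal sketch (the toy fitness function, the hill-climbing stand-in for "learning", and all parameter values below are illustrative assumptions, not taken from the paper): under Lamarckian learning, the traits acquired by local learning are written back into the genotype, whereas under Baldwinian learning only the improved fitness is used for selection and the genotype is left unchanged.

```python
import random

def fitness(genome):
    # Toy fitness: negative squared distance from a fixed target (maximize).
    target = [0.5, -0.3, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def local_search(genome, steps=5, sigma=0.05):
    # A few hill-climbing steps stand in for "learning"
    # (e.g. gradient-based training of the network weights).
    best = list(genome)
    for _ in range(steps):
        candidate = [g + random.gauss(0, sigma) for g in best]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def evaluate(genome, mode):
    """Return (genotype to keep in the population, fitness used for selection)."""
    learned = local_search(genome)
    if mode == "lamarckian":
        # Acquired traits are written back into the genotype.
        return learned, fitness(learned)
    # Baldwinian: learned fitness guides selection, genotype is untouched.
    return list(genome), fitness(learned)
```

In a full cellular GA each individual would be evaluated this way within its local neighbourhood before selection and crossover; the sketch only isolates the genotype-update difference between the two mechanisms.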
Original language: English
Title of host publication: Proceedings of the IEEE Conference on Evolutionary Computation, ICEC
Number of pages: 5
Publication status: Published - 1 Jan 1997
Event: Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, ICEC'97 - Indianapolis, IN, United States
Duration: 13 Apr 1997 - 16 Apr 1997


Conference: Proceedings of the 1997 IEEE International Conference on Evolutionary Computation, ICEC'97
Country/Territory: United States
City: Indianapolis, IN

ASJC Scopus subject areas

  • Computer Science (all)
  • Engineering (all)
