The multi-phase method in fast learning algorithms

Chi Chung Cheung, Sin Chun Ng

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-reviewed

2 Citations (Scopus)


The backpropagation (BP) learning algorithm is the most widely used supervised learning technique, extensively applied in the training of multi-layer feed-forward neural networks. Many modifications proposed to improve the performance of BP have focused on solving the "flat spot" problem to increase the convergence rate. However, their performance is limited by the error overshooting problem. In [20], a novel approach called BP with Two-Phase Magnified Gradient Function (2P-MGFPROP) was introduced to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. In this paper, this approach is further enhanced by dividing the learning process into multiple phases, with different fast learning algorithms assigned to different phases to improve the convergence rate across different adaptive problems. Through the performance investigation, it is found that the convergence rate can be increased by up to two times compared with existing fast learning algorithms.
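The phase-switching idea in the abstract can be illustrated with a minimal sketch: train with one update rule (here, a magnified-gradient variant that keeps the sigmoid derivative away from the flat spot), then switch to standard BP once the error falls below a threshold to avoid overshooting. The threshold `err_switch`, the exponent `s`, and the single-neuron setup are illustrative assumptions, not the paper's exact algorithms or parameters.

```python
import math
import random

def mgf_sigmoid_grad(y, s=2.0):
    # "Magnified" derivative: raising y*(1-y) to the power 1/s keeps the
    # gradient away from zero when the sigmoid output saturates (flat spot).
    return (y * (1.0 - y)) ** (1.0 / s)

def plain_sigmoid_grad(y):
    # Ordinary sigmoid derivative used by standard BP.
    return y * (1.0 - y)

def predict(w, x1, x2):
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + w[2])))

def train(samples, epochs=2000, lr=0.5, err_switch=0.05):
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # 2 inputs + bias
    grad_fn = mgf_sigmoid_grad          # phase 1: magnified gradient
    for _ in range(epochs):
        total_err = 0.0
        for x1, x2, t in samples:
            y = predict(w, x1, x2)
            e = t - y
            total_err += e * e
            g = e * grad_fn(y)          # delta rule with the current phase's gradient
            w[0] += lr * g * x1
            w[1] += lr * g * x2
            w[2] += lr * g
        if total_err < err_switch and grad_fn is mgf_sigmoid_grad:
            grad_fn = plain_sigmoid_grad  # phase 2: standard BP to curb overshoot
    return w

# Linearly separable AND problem (a single neuron suffices; XOR would not).
AND = [(0, 0, 0.0), (0, 1, 0.0), (1, 0, 0.0), (1, 1, 1.0)]
```

A multi-phase scheme in the paper's sense would generalize this to several error intervals, each mapped to a different fast learning algorithm; the two-phase version above only shows the switching mechanism.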

Original language: English
Title of host publication: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Number of pages: 8
Publication status: Published - 14 Jun 2009
Event: 2009 International Joint Conference on Neural Networks, IJCNN 2009 - Atlanta, GA, United States
Duration: 14 Jun 2009 - 19 Jun 2009

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks


Conference: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Country/Territory: United States
City: Atlanta, GA

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
