TY - GEN
T1 - Backpropagation with two-phase magnified gradient function
AU - Cheung, Chi Chung
AU - Ng, Sin Chun
PY - 2008/6/1
Y1 - 2008/6/1
N2 - The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications have been proposed to improve the performance of BP, and BP with Magnified Gradient Function (MGFPROP) is one of the fast learning algorithms that improve both the convergence rate and the global convergence capability of BP [19]. MGFPROP outperforms many benchmark fast learning algorithms on different adaptive problems [19]. However, the performance of MGFPROP is limited by the error overshooting problem. This paper presents a new approach, BP with Two-Phase Magnified Gradient Function (2P-MGFPROP), to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. 2P-MGFPROP modifies MGFPROP by dividing the learning process into two phases and adjusting the parameter setting of MGFPROP according to the nature of each phase. Simulation results on two different adaptive problems show that 2P-MGFPROP outperforms MGFPROP with its optimal parameter setting in terms of convergence rate, with improvements of up to 50%.
AB - The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications have been proposed to improve the performance of BP, and BP with Magnified Gradient Function (MGFPROP) is one of the fast learning algorithms that improve both the convergence rate and the global convergence capability of BP [19]. MGFPROP outperforms many benchmark fast learning algorithms on different adaptive problems [19]. However, the performance of MGFPROP is limited by the error overshooting problem. This paper presents a new approach, BP with Two-Phase Magnified Gradient Function (2P-MGFPROP), to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. 2P-MGFPROP modifies MGFPROP by dividing the learning process into two phases and adjusting the parameter setting of MGFPROP according to the nature of each phase. Simulation results on two different adaptive problems show that 2P-MGFPROP outperforms MGFPROP with its optimal parameter setting in terms of convergence rate, with improvements of up to 50%.
UR - http://www.scopus.com/inward/record.url?scp=56349134905&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2008.4633873
DO - 10.1109/IJCNN.2008.4633873
M3 - Conference article published in proceedings or book
AN - SCOPUS:56349134905
SN - 9781424418213
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 710
EP - 715
BT - 2008 International Joint Conference on Neural Networks, IJCNN 2008
T2 - 2008 International Joint Conference on Neural Networks, IJCNN 2008
Y2 - 1 June 2008 through 8 June 2008
ER -