Abstract
In this paper, a new algorithm is proposed to solve the "flat spot" problem in back-propagation networks by magnifying the gradient function. The idea of the new learning algorithm is to vary the gradient of the activation function so as to magnify the back-propagated error signal, especially when the output approaches a wrong value; in this way the convergence rate is accelerated and the flat-spot problem is eliminated. Simulation results show that, in terms of convergence rate and global search capability, the new algorithm consistently outperforms the traditional methods.
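The abstract does not give the exact form of the magnification, so the sketch below is only illustrative: it assumes a sigmoid output unit and models the magnification as raising the sigmoid derivative to a fractional power (the `power` parameter is a hypothetical choice, not taken from the paper). It shows why the standard error term vanishes at a saturated but wrong output, and how an amplified term keeps the weight update alive.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_delta(output, target):
    # Standard back-propagated error term for a sigmoid output unit:
    # delta = (target - output) * f'(net), with f'(net) = output * (1 - output).
    # The derivative vanishes as output -> 0 or 1, even when the output is
    # wrong (the "flat spot" problem).
    return (target - output) * output * (1.0 - output)

def magnified_delta(output, target, power=0.5):
    # Hypothetical magnified error term: raising the derivative to a
    # fractional power (0.5 here is an assumed value, not the paper's formula)
    # keeps it away from zero near saturation, so the error signal is
    # amplified when the output approaches a wrong extreme.
    return (target - output) * (output * (1.0 - output)) ** power

if __name__ == "__main__":
    # A saturated but wrong output: target 1.0, actual output near 0.
    out, tgt = 0.01, 1.0
    print("standard delta :", standard_delta(out, tgt))   # ~0.0098, learning stalls
    print("magnified delta:", magnified_delta(out, tgt))  # ~0.0985, much larger update
```

With a target of 1.0 and an output of 0.01, the standard delta is about 0.0098 while the magnified delta is roughly 0.099, which illustrates why the flat spot stalls learning under the standard rule and how gradient magnification counteracts it.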
Original language | English |
---|---|
Pages | 156-159 |
Number of pages | 4 |
Publication status | Published - 15 Jul 2001 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States, 15 Jul 2001 → 19 Jul 2001 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN'01) |
---|---|
Country/Territory | United States |
City | Washington, DC |
Period | 15/07/01 → 19/07/01 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence