This paper presents a very fast step-acceleration-based training algorithm (SATA) for multilayer feedforward neural networks. The most notable virtue of this algorithm is that it does not need to calculate the gradient of the target function; in each iteration, computation is concentrated only on the part of the network that has changed. The proposed algorithm is simple, flexible, and feasible, and converges quickly. Many simulations comparing it with other methods, including conventional backpropagation (BP), the conjugate gradient (CG) method, and BP with weight extrapolation (BPWE), have confirmed its superiority in convergence speed and required computation time.
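The abstract gives no implementation details, but a gradient-free, step-acceleration scheme of the kind described can be illustrated with a coordinate-wise pattern search: each weight is perturbed in turn, an improving step is accepted and its step size grown (acceleration), and a failing step shrinks it. The network size, function names (`mse`, `step_accel_train`), and hyperparameters below are illustrative assumptions, not the authors' SATA.

```python
import numpy as np

def mse(W, X, y):
    # Tiny 2-2-1 network; W packs both layers' weights as a flat vector of 9.
    W1 = W[:6].reshape(2, 3)   # hidden layer: 2 units, 2 inputs + bias
    W2 = W[6:].reshape(1, 3)   # output layer: 1 unit, 2 hidden + bias
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = np.tanh(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    out = Hb @ W2.T
    return np.mean((out.ravel() - y) ** 2)

def step_accel_train(X, y, iters=200, grow=1.2, shrink=0.5, seed=0):
    # Gradient-free training: perturb one weight at a time, so each
    # error re-evaluation concerns only the varied part of the network.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=9)
    steps = np.full(9, 0.1)          # per-weight step sizes (assumed values)
    best = mse(W, X, y)
    for _ in range(iters):
        for i in range(len(W)):
            for sign in (+1.0, -1.0):
                trial = W.copy()
                trial[i] += sign * steps[i]
                e = mse(trial, X, y)
                if e < best:         # success: accept step and accelerate
                    W, best = trial, e
                    steps[i] *= grow
                    break
            else:
                steps[i] *= shrink   # both directions failed: decelerate
    return W, best
```

Because only improving steps are accepted, the training error is monotonically non-increasing; on the XOR problem this sketch typically drives the error well below its initial value without ever computing a gradient.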
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
Publication status: Published - 1 Dec 2002
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition