Step acceleration based training algorithm for feedforward neural networks

Yanlai Li, Kuanquan Wang, Dapeng Zhang

Research output: Journal article, peer-reviewed

4 Citations (Scopus)

Abstract

This paper presents a very fast step acceleration based training algorithm (SATA) for training multilayer feedforward neural networks. The most outstanding virtue of this algorithm is that it does not need to calculate the gradient of the target function; in each iteration, computation is concentrated only on the part of the network that has changed. The proposed algorithm is simple, flexible, and feasible, and converges quickly. Many simulations have confirmed its superiority over other methods, including conventional backpropagation (BP), the conjugate gradient method (CG), and BP with weight extrapolation (BPWE), in terms of convergence speed and required computation time.
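The paper itself gives the full algorithm; as a rough illustration of the gradient-free, step-acceleration idea described in the abstract, the sketch below trains a tiny feedforward network by trial steps on individual weights, enlarging a weight's step on success and shrinking it on failure. All names, step factors, and the coordinate-wise search scheme here are assumptions for illustration, not the authors' exact SATA procedure, and the paper's trick of recomputing only the varied part of the network is omitted for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 feedforward network on the XOR task; all weights and
# biases are kept in one flat parameter vector of length 9.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def unpack(w):
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8].reshape(2, 1)  # hidden -> output weights
    b2 = w[8:9]                # output bias
    return W1, b1, W2, b2

def loss(w):
    # Mean squared error of a full forward pass.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

def step_accel_train(w, step=0.5, grow=2.0, shrink=0.5,
                     iters=200, tol=1e-4):
    """Gradient-free coordinate search with step acceleration
    (illustrative sketch only, not the paper's exact SATA):
    try +/- step on each weight; if the error drops, keep the move
    and accelerate that weight's step; otherwise shrink it."""
    steps = np.full(w.size, step)
    best = loss(w)
    for _ in range(iters):
        for i in range(w.size):
            improved = False
            for sign in (+1.0, -1.0):
                trial = w.copy()
                trial[i] += sign * steps[i]
                f = loss(trial)
                if f < best:
                    w, best = trial, f
                    steps[i] *= grow    # accelerate on success
                    improved = True
                    break
            if not improved:
                steps[i] *= shrink      # decelerate on failure
        if steps.max() < tol:           # all steps tiny: converged
            break
    return w, best

w0 = rng.normal(scale=1.0, size=9)
w_trained, final_err = step_accel_train(w0)
print(final_err <= loss(w0))
```

Because no derivative of the loss is ever formed, each update costs only forward passes, which is the property the abstract highlights; the grow/shrink factors play the role of the acceleration step control.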
Original language: English
Pages (from-to): 84-87
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
Volume: 16
Issue number: 2
Publication status: Published - 1 Dec 2002

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
