Parameter by parameter algorithm for multilayer perceptrons

Yanlai Li, Dapeng Zhang, Kuanquan Wang

Research output: Journal article (academic research, peer-reviewed)

5 Citations (Scopus)

Abstract

This paper presents a parameter by parameter (PBP) algorithm for speeding up the training of multilayer perceptrons (MLP). This new algorithm uses an approach similar to that of the layer by layer (LBL) algorithm, taking into account the input errors of the output layer and hidden layer. The proposed PBP algorithm, however, is not burdened by the need to calculate the gradient of the error function. In each iteration step, the weights or thresholds can be optimized directly, one by one, with the other variables held fixed. Four classes of solution equations for the network parameters are deduced. The effectiveness of the PBP algorithm is demonstrated using two benchmarks. In comparisons with the BP algorithm with momentum (BPM) and the conventional LBL algorithms, PBP achieves faster convergence and better simulation performance.
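The abstract's core idea, optimizing each weight or threshold one at a time while holding all other parameters fixed, can be illustrated with a simple coordinate-wise sketch. Note this is not the paper's actual PBP method (which derives closed-form solution equations per parameter class); it is a generic stand-in using a 1-D step search per parameter, with a toy 2-3-1 network on XOR data. All names and sizes here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: PBP derives closed-form per-parameter solution
# equations; here we approximate the "one parameter at a time, others fixed"
# idea with a simple shrinking-step 1-D search over each scalar parameter.

rng = np.random.default_rng(0)

# Toy data: the XOR problem (assumed for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    W1, b1, W2, b2 = params
    h = sigmoid(X @ W1 + b1)              # hidden layer activations
    return sigmoid(h @ W2 + b2).ravel()   # network output

def loss(params):
    return np.mean((forward(params, X) - y) ** 2)

# Initialize a 2-3-1 MLP with random weights and thresholds
params = [rng.normal(size=(2, 3)), rng.normal(size=3),
          rng.normal(size=(3, 1)), rng.normal(size=1)]

def optimize_one(flat, i, params, step=0.5, shrink=0.5, tries=20):
    """Reduce the loss over a single scalar parameter, others held fixed."""
    best = loss(params)
    for _ in range(tries):
        for delta in (step, -step):
            flat[i] += delta
            trial = loss(params)
            if trial < best:
                best = trial
                break            # keep the improving move, retry same step
            flat[i] -= delta     # revert the non-improving move
        else:
            step *= shrink       # neither direction helped: shrink the step
    return best

initial = loss(params)
for sweep in range(30):
    for arr in params:           # visit every weight matrix / threshold vector
        flat = arr.ravel()       # view: in-place edits update the network
        for i in range(flat.size):
            optimize_one(flat, i, params)
final = loss(params)
```

Because each per-parameter move is accepted only if it lowers the loss, the training error decreases monotonically across sweeps, mirroring the gradient-free, parameter-at-a-time character the abstract describes.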
Original language: English
Pages (from-to): 229-242
Number of pages: 14
Journal: Neural Processing Letters
Volume: 23
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2006

Keywords

  • BP algorithm with momentum
  • Layer by layer algorithm
  • Multilayer perceptrons
  • Parameter by parameter algorithm
  • Training algorithm

ASJC Scopus subject areas

  • Software
  • General Neuroscience
  • Computer Networks and Communications
  • Artificial Intelligence
