Magnified gradient function to improve first-order gradient-based learning algorithms

Sin Chun Ng, Chi Chung Cheung, Andrew Kwok Fai Lui, Shensheng Xu

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

1 Citation (Scopus)

Abstract

In this paper, we propose a new approach to improving the speed and global convergence capability of existing first-order gradient-based fast learning algorithms. The idea is to magnify the gradient terms of the activation function so that faster learning and better global convergence can be achieved. The approach can be applied on top of existing gradient-based algorithms. Simulation results show that it significantly speeds up the convergence rate and increases the global convergence capability of popular first-order gradient-based fast learning algorithms for multi-layer feed-forward neural networks.
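To illustrate the kind of magnification the abstract describes, the sketch below raises the logistic activation's derivative term to a power 1/S (S ≥ 1), which enlarges the gradient where the activation saturates and it would otherwise be tiny. This is a minimal illustrative sketch, not the paper's exact formulation; the function names, the choice of the logistic activation, and the exponent parameter `S` are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(y, S=2.0):
    """Magnified gradient term (illustrative sketch).

    The standard logistic derivative is y * (1 - y). Raising it to the
    power 1/S with S >= 1 magnifies small gradients near saturation
    (y close to 0 or 1) while leaving S = 1 equal to the plain derivative.
    """
    return (y * (1.0 - y)) ** (1.0 / S)

# Compare the plain and magnified gradient terms in a saturated region.
y = sigmoid(4.0)                     # activation close to 1 (flat region)
plain = y * (1.0 - y)                # standard derivative, very small here
magnified = magnified_deriv(y, S=2.0)
print(plain, magnified)              # the magnified term is larger
```

In a backpropagation update, the magnified term would replace the plain derivative when computing the error deltas, so weight updates in saturated regions remain large enough for learning to keep moving.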

Original language: English
Title of host publication: Advances in Neural Networks, ISNN 2012 - 9th International Symposium on Neural Networks, Proceedings
Pages: 448-457
Number of pages: 10
Edition: PART 1
DOIs
Publication status: Published - 11 Jul 2012
Event: 9th International Symposium on Neural Networks, ISNN 2012 - Shenyang, China
Duration: 11 Jul 2012 - 14 Jul 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 7367 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 9th International Symposium on Neural Networks, ISNN 2012
Country/Territory: China
City: Shenyang
Period: 11/07/12 - 14/07/12

Keywords

  • backpropagation
  • gradient-based algorithms
  • magnified gradient function
  • Quickprop
  • Rprop

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
