Neural network training using stochastic PSO

Xin Chen, Yangmin Li

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)

Abstract

Particle swarm optimization (PSO) is widely applied to training neural networks (NNs). Because the number of NN weights is huge in many applications, the dimension of the search space becomes so large when PSO algorithms are applied to NN training that standard PSO tends to converge prematurely. In this paper an improved stochastic PSO (SPSO) is presented, in which a random velocity is added to improve the particles' exploration ability. Since SPSO explores the solution space more thoroughly, it is able to find the global best solution with high probability. Hence SPSO is well suited to high-dimensional optimization problems, especially NN training.
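
The abstract describes the key modification only at a high level: a random velocity term is added to the usual PSO velocity update so that particles keep exploring a high-dimensional weight space instead of stagnating. The Python sketch below illustrates what such an update could look like; the constants, the search bounds, and the Gaussian form of the random term are illustrative assumptions, not the exact SPSO formulation of the paper.

```python
import numpy as np

def spso_minimize(objective, dim, n_particles=30, n_iter=200,
                  w=0.7, c1=1.5, c2=1.5, c3=0.1, seed=0):
    """Minimise `objective` over `dim` dimensions with a PSO variant that
    adds a random velocity component (hedged sketch; all constants are
    assumptions chosen for illustration)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))  # particle positions
    vel = np.zeros((n_particles, dim))                      # particle velocities
    pbest = pos.copy()                                      # personal best positions
    pbest_val = np.array([objective(p) for p in pos])       # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()              # global best position

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard PSO velocity update (inertia + cognitive + social terms) ...
        vel = (w * vel
               + c1 * r1 * (pbest - pos)
               + c2 * r2 * (gbest - pos)
               # ... plus an extra random velocity term to sustain exploration
               + c3 * rng.normal(size=(n_particles, dim)))
        pos = pos + vel

        # Update personal and global bests
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, pbest_val.min()
```

For NN training as in the paper's setting, each particle position would encode the flattened weight vector of the network and `objective` would evaluate the training error for those weights; those application details are omitted from the sketch.
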
Original language: English
Title of host publication: Neural Information Processing - 13th International Conference, ICONIP 2006, Proceedings
Publisher: Springer Verlag
Pages: 1051-1060
Number of pages: 10
ISBN (Print): 3540464816, 9783540464815
Publication status: Published - 1 Jan 2006
Externally published: Yes
Event: 13th International Conference on Neural Information Processing, ICONIP 2006 - Hong Kong, Hong Kong
Duration: 3 Oct 2006 - 6 Oct 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4233 LNCS - II
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 13th International Conference on Neural Information Processing, ICONIP 2006
Country/Territory: Hong Kong
City: Hong Kong
Period: 3/10/06 - 6/10/06

ASJC Scopus subject areas

  • General Computer Science
  • Theoretical Computer Science
