Abstract
In this paper, we analyze the effect of initial conditions on the constrained anti-Hebbian learning algorithm proposed by Gao, Ahmad, and Swamy. Although their approach has a minimal memory requirement and is computationally simple, we demonstrate through a simple example that divergence is always possible when the initial state satisfies a suitable condition. We point out that, in analyzing their learning rule, a constrained differential equation has to be considered instead of the unconstrained one studied in their original paper. Furthermore, we analyze this constrained differential equation and prove that 1) it diverges under similar conditions and 2) it has only one stable equilibrium, whose domain of attraction we identify. Accordingly, we suggest a re-initialization approach for the learning rule, which leads to convergence while preserving the simplicity of the original approach at a slight increase in computation.
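The abstract does not reproduce the learning rule itself, so the sketch below is illustrative only: a generic stochastic-gradient descent of the total-least-squares (TLS) cost e²/(1 + ‖w‖²) stands in for the Gao-Ahmad-Swamy update, paired with a norm-triggered re-initialization of the kind the abstract proposes. The step size `mu`, the threshold `norm_threshold`, the synthetic data, and the specific update form are all assumptions, not the authors' formulation.

```python
import numpy as np

# Minimal sketch only. A generic stochastic-gradient update for the TLS
# cost e^2 / (1 + ||w||^2) stands in for the constrained anti-Hebbian rule;
# mu, norm_threshold, and the re-initialization test are assumptions.

rng = np.random.default_rng(0)

# Synthetic TLS setting: both the inputs and the outputs are noisy.
n, dim = 5000, 4
w_true = rng.standard_normal(dim)
x_clean = rng.standard_normal((n, dim))
x = x_clean + 0.05 * rng.standard_normal((n, dim))
d = x_clean @ w_true + 0.05 * rng.standard_normal(n)

w = 0.1 * rng.standard_normal(dim)   # small random initial state
mu = 1e-2                            # learning rate (assumed)
norm_threshold = 10.0                # divergence guard (assumed)

for k in range(n):
    e = d[k] - x[k] @ w              # instantaneous output error
    s = 1.0 + w @ w
    # Stochastic gradient step on e^2 / (1 + ||w||^2); the coupling of
    # the error with the weights gives the update its anti-Hebbian flavor.
    w += mu * (e * x[k] + (e * e / s) * w) / s
    # Re-initialization guard in the spirit of the abstract: if the state
    # drifts toward the divergent region (signaled here by a growing norm),
    # restart from a small random state instead of letting it blow up.
    if np.linalg.norm(w) > norm_threshold:
        w = 0.1 * rng.standard_normal(dim)

print("estimated w:", np.round(w, 3))
print("true w:     ", np.round(w_true, 3))
```

A practical trigger would use the domain-of-attraction condition derived in the paper rather than the crude norm test above, which is only a proxy for detecting escape from the stable equilibrium's basin.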
Original language | English |
---|---|
Pages (from-to) | 1494-1502 |
Number of pages | 9 |
Journal | IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing |
Volume | 45 |
Issue number | 11 |
Publication status | Published - 1 Dec 1998 |
Keywords
- Constrained anti-Hebbian learning algorithm
- Convergence and divergence analysis
- Stochastic approximation
- Total least squares
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering