Abstract
Auto-encoders are a promising non-probabilistic representation learning paradigm that can efficiently learn stable deterministic features. Recently, auto-encoder algorithms have drawn increasing attention because of their attractive performance in learning representations that are insensitive to changes in the data. The most representative auto-encoder algorithms are the regularized auto-encoders, including contractive auto-encoders, denoising auto-encoders, and sparse auto-encoders. In this paper, we incorporate both Hessian regularization and sparsity constraints into auto-encoders and propose a new auto-encoder algorithm called Hessian regularized sparse auto-encoders (HSAE). The advantages of the proposed HSAE are twofold: (1) it employs Hessian regularization to preserve the local geometry of the data points; (2) it efficiently extracts the hidden structure in the data by using sparsity constraints. Finally, we stack the single-layer auto-encoders to form a deep HSAE architecture. To evaluate its effectiveness, we conduct extensive experiments on the popular MNIST and CIFAR-10 datasets and compare the proposed HSAE with basic auto-encoders, sparse auto-encoders, Laplacian auto-encoders, and Hessian auto-encoders. The experimental results demonstrate that HSAE outperforms these baseline algorithms.
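The abstract names the ingredients of the objective (reconstruction error, a Hessian regularizer, and a sparsity penalty) but not the exact formulation. The sketch below illustrates one plausible single-layer HSAE loss of that shape; it is a minimal sketch, not the paper's implementation. Assumptions: a sigmoid encoder, a linear decoder, a precomputed Hessian energy matrix `H` (as estimated in Hessian eigenmaps / Hessian regularization), a KL-divergence sparsity penalty as in standard sparse auto-encoders, and hypothetical trade-off weights `lam`, `beta`, and target sparsity `rho`.

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    # KL divergence between target sparsity rho and mean hidden activations rho_hat
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def hsae_loss(X, W, b, W_dec, b_dec, H, lam=1e-3, beta=3.0, rho=0.05):
    """Sketch of a single-layer HSAE objective (assumed form, not the paper's code).

    X            : (n, d) data matrix
    W, b         : encoder weights (d, k) and bias (k,)
    W_dec, b_dec : decoder weights (k, d) and bias (d,)
    H            : (n, n) Hessian energy matrix precomputed from the data
                   (assumption: as in Hessian eigenmaps)
    """
    A = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden activations (sigmoid)
    X_rec = A @ W_dec + b_dec                # reconstruction (linear decoder)
    recon = 0.5 * np.sum((X - X_rec) ** 2) / len(X)
    hess = np.trace(A.T @ H @ A)             # Hessian regularizer tr(A^T H A)
    rho_hat = A.mean(axis=0)                 # average activation per hidden unit
    sparse = kl_sparsity(rho, rho_hat)
    return recon + lam * hess + beta * sparse
```

Under this reading, the Hessian term penalizes hidden representations that vary sharply over the data manifold (favoring locally geodesic-linear functions, a richer null space than the Laplacian penalty), while the KL term pushes the average activation of each hidden unit toward the small target `rho`; a deep HSAE would be obtained by training such layers one at a time and stacking them.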
Original language | English |
---|---|
Pages (from-to) | 59-65 |
Number of pages | 7 |
Journal | Neurocomputing |
Volume | 187 |
Publication status | Published - 26 Apr 2016 |
Keywords
- Auto-encoder
- Hessian regularization
- Manifold
- Sparse representation
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence