Abstract
We address the problem of visual knowledge adaptation by leveraging labeled patterns from the source domain and a very limited number of labeled instances in the target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine (ELM)-based cross-domain network learning framework, called ELM-based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by simultaneously minimizing the ℓ2,1-norm of the network output weights and the learning error. The unlabeled target data, as useful knowledge, are also integrated as a fidelity term to guarantee stability during cross-domain learning: EDA minimizes the matching error between the learned classifier and a base classifier, so that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferrable. In addition, a manifold regularization with a Laplacian graph is incorporated, which benefits semi-supervised learning. We further extend EDA to a multi-view model, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
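The abstract rests on a core property of ELMs: hidden-layer weights are a fixed random projection, and only the output weights are learned, which is why they admit a closed-form solution. The sketch below illustrates that basic ELM mechanism with a ridge-regularized least-squares solve; it is a minimal illustration only, not the paper's EDA method (it omits the ℓ2,1-norm, the base-classifier fidelity term, and the Laplacian manifold regularizer), and all function names and parameters here are our own.

```python
import numpy as np

# Minimal sketch of a basic extreme learning machine (ELM):
# random, fixed hidden weights + analytically solved output weights.
# NOT the paper's EDA model; no domain-adaptation terms are included.

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, reg=1e-2):
    """Fit output weights beta by ridge-regularized least squares."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)        # random biases (never trained)
    H = np.tanh(X @ W + b)                   # hidden-layer activations
    # Closed-form solution: beta = (H^T H + reg*I)^{-1} H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: two well-separated Gaussian blobs, one-hot labels.
X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
Y = np.vstack([np.tile([1.0, 0.0], (40, 1)), np.tile([0.0, 1.0], (40, 1))])
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Because the only unknown is `beta`, training reduces to one linear solve; EDA exploits this same structure, replacing the plain ridge penalty with its ℓ2,1-norm and cross-domain terms so the solved output weights transfer across domains.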
| Original language | English |
|---|---|
| Article number | 7539280 |
| Pages (from-to) | 4959-4973 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 25 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 1 Oct 2016 |
Keywords
- Cross-domain learning
- Domain adaptation
- Extreme learning machine
- Knowledge adaptation
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design