Joint Tensor Feature Analysis for Visual Object Recognition

Wai Keung Wong, Zhihui Lai, Yong Xu, Jiajun Wen, Chu Po Ho

Research output: Journal article publication › Journal article › Academic research › peer-review

70 Citations (Scopus)

Abstract

Tensor-based object recognition has been widely studied in the past several years. This paper focuses on the issue of joint feature selection from tensor data and proposes a novel method called joint tensor feature analysis (JTFA) for tensor feature extraction and recognition. To obtain a set of jointly sparse projections for tensor feature extraction, we define the modified within-class tensor scatter value and the modified between-class tensor scatter value for regression. The k-mode optimization technique and L2,1-norm jointly sparse regression are combined to compute the optimal solutions. The convergence analysis, the computational complexity analysis, and the essence of the proposed method/model are also presented. Interestingly, the proposed method turns out to be very similar to singular value decomposition on the scatter matrix but with a sparsity constraint on the matrix of right singular vectors, or equivalently to eigen-decomposition on the scatter matrix carried out in a sparse manner. Experimental results on several tensor datasets indicate that JTFA outperforms some well-known tensor feature extraction and selection algorithms.
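The joint sparsity in JTFA comes from the L2,1-norm penalty, which drives entire rows of a projection matrix to zero so that the same features are discarded across all projection directions. As a rough illustration only, not the authors' algorithm (which alternates over tensor modes and regresses against the modified scatter values), the following NumPy sketch solves a generic L2,1-regularized regression by iteratively reweighted least squares; the data, the target matrix Y, and the parameter lam are placeholders.

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the Euclidean norms of the rows of W."""
    return np.sum(np.linalg.norm(W, axis=1))

def l21_regression(X, Y, lam=0.1, n_iter=50, eps=1e-8):
    """Approximately solve  min_W ||Y - X W||_F^2 + lam * ||W||_{2,1}
    by iteratively reweighted least squares. Rows of W shrink to zero
    jointly, which is what yields joint feature selection.
    X: (n_samples, n_features), Y: (n_samples, n_targets)."""
    n_features = X.shape[1]
    D = np.eye(n_features)              # row-reweighting matrix
    XtX, XtY = X.T @ X, X.T @ Y
    for _ in range(n_iter):
        # Ridge-like step with the current reweighting matrix D
        W = np.linalg.solve(XtX + lam * D, XtY)
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    true_W = np.zeros((20, 3))
    true_W[:5] = rng.standard_normal((5, 3))   # only 5 informative rows
    Y = X @ true_W + 0.01 * rng.standard_normal((100, 3))
    W = l21_regression(X, Y, lam=1.0)
    print("selected rows:", np.where(np.linalg.norm(W, axis=1) > 1e-3)[0])
```

In JTFA this kind of row-sparse solve would be applied to each k-mode unfolding of the tensor in turn, but the per-mode scatter construction and convergence details are specific to the paper and are not reproduced here.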
Original language: English
Article number: 6980062
Pages (from-to): 2425-2436
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 45
Issue number: 11
DOIs
Publication status: Published - 1 Nov 2015

Keywords

  • Discriminant analysis
  • feature selection
  • object recognition
  • sparse learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Information Systems
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering
