The ℓ2,1-norm stacked robust autoencoders for domain adaptation

Wenhao Jiang, Hongchang Gao, Fu Lai Korris Chung, Heng Huang

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

20 Citations (Scopus)

Abstract

Recently, deep learning methods that employ stacked denoising autoencoders (SDAs) have been successfully applied to domain adaptation. Remarkable performance on multi-domain sentiment analysis datasets has been reported, making deep learning a promising approach to domain adaptation problems. SDAs are distinguished by learning robust data representations for recovering the original features that have been artificially corrupted with noise. The idea has been further exploited to marginalize out the random corruptions by a state-of-the-art method called mSDA. In this paper, a deep learning method for domain adaptation called ℓ2,1-norm stacked robust autoencoders (ℓ2,1-SRA) is proposed to learn useful representations for domain adaptation tasks. Each layer of ℓ2,1-SRA consists of two steps: a robust linear reconstruction step based on ℓ2,1 robust regression, and a non-linear squashing transformation step. The experimental results demonstrate that the proposed method is very effective on multiple cross-domain classification datasets, including the Amazon review dataset, the spam dataset from the ECML/PKDD Discovery Challenge 2006, and the 20 Newsgroups dataset.
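The two-step layer described above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes the robust linear reconstruction minimizes an ℓ2,1-norm (column-wise sum of per-sample ℓ2 residual norms) reconstruction loss from a masking-corrupted copy of the input, solved by iteratively reweighted least squares with a small ridge term for stability, followed by a tanh squashing step; the function name, corruption scheme, and regularization constant are all assumptions for illustration.

```python
import numpy as np

def l21_robust_layer(X, noise=0.3, lam=1e-2, n_iter=20, eps=1e-8, rng=None):
    """One layer of an l2,1-norm robust autoencoder (illustrative sketch).

    Learns W reconstructing the clean features X (d x n, one sample per
    column) from a masking-corrupted copy Xc by minimizing
        sum_i ||x_i - W xc_i||_2  +  lam * ||W||_F^2,
    an l2,1-norm reconstruction loss, via iteratively reweighted least
    squares (IRLS). Returns the squashed representation tanh(W X) and W.
    """
    rng = np.random.default_rng(rng)
    d, n = X.shape
    Xc = X * (rng.random((d, n)) > noise)      # masking (dropout) corruption
    w = np.ones(n)                             # per-sample IRLS weights
    for _ in range(n_iter):
        # Weighted ridge regression: W = (X D Xc^T)(Xc D Xc^T + lam I)^{-1}
        A = X @ (Xc * w).T
        B = Xc @ (Xc * w).T + lam * np.eye(d)
        W = A @ np.linalg.inv(B)
        # Re-weight each sample inversely to its residual norm, so outlying
        # samples contribute less (this is what makes the loss l2,1 robust).
        R = X - W @ Xc
        w = 1.0 / (2.0 * np.maximum(np.linalg.norm(R, axis=0), eps))
    return np.tanh(W @ X), W

# Stacking: feed each layer's squashed output into the next, as in SDA/mSDA.
X = np.random.default_rng(0).standard_normal((10, 50))
H1, _ = l21_robust_layer(X, rng=1)
H2, _ = l21_robust_layer(H1, rng=2)
```

Down-weighting samples by the inverse of their residual norm is the standard IRLS route to an ℓ2,1 objective; large-residual (outlier) samples get small weights in the next least-squares solve, which is what distinguishes this from the plain squared-error reconstruction used in mSDA.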
Original language: English
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Publisher: AAAI Press
Pages: 1723-1729
Number of pages: 7
ISBN (Electronic): 9781577357605
Publication status: Published - 1 Jan 2016
Event: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix Convention Center, Phoenix, United States
Duration: 12 Feb 2016 - 17 Feb 2016

Conference

Conference: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Country/Territory: United States
City: Phoenix
Period: 12/02/16 - 17/02/16

ASJC Scopus subject areas

  • Artificial Intelligence