Energy-constrained self-training for unsupervised domain adaptation

XiaoFeng Liu, Bo Hu, Xiongchang Liu, Jun Lu, Jia You, Lingsheng Kong

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain so that a model performs well on an unlabeled target domain. Recently, deep self-training has involved an iterative process of predicting on the target domain and then taking the confident predictions as hard pseudo-labels for retraining. However, the pseudo-labels are usually unreliable and can easily lead to deviated solutions with propagated errors. In this paper, we resort to the energy-based model and constrain the training of the unlabeled target samples with an energy function minimization objective, which can be applied as a simple additional regularization. In this framework, it is possible to gain the benefits of the energy-based model while retaining strong discriminative performance in a plug-and-play fashion. We deliver extensive experiments on the most popular large-scale UDA benchmarks for image classification as well as semantic segmentation to demonstrate its generality and effectiveness.
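The abstract describes adding an energy-minimization term on unlabeled target samples as a regularizer alongside the usual pseudo-label cross-entropy. Below is a minimal NumPy sketch of that idea, assuming the energy of a sample is the free energy E(x) = -logsumexp of the classifier logits (a common choice in discriminative energy-based models); the function names and the weight `lam` are illustrative, not the paper's exact objective.

```python
import numpy as np

def log_sum_exp(logits):
    """Numerically stable log-sum-exp over the class axis."""
    m = logits.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)

def self_training_loss(logits_t, pseudo_labels, lam=0.1):
    """Pseudo-label cross-entropy on target logits plus an
    energy-minimization regularizer (hypothetical sketch)."""
    # cross-entropy on confident hard pseudo-labels
    log_probs = logits_t - log_sum_exp(logits_t)[:, None]
    ce = -log_probs[np.arange(len(pseudo_labels)), pseudo_labels].mean()
    # free energy of each target sample: E(x) = -logsumexp_k f_k(x)
    energy = -log_sum_exp(logits_t)
    return ce + lam * energy.mean()

logits = np.array([[2.0, 0.5], [0.1, 1.5]])   # toy target-domain logits
labels = np.array([0, 1])                      # hard pseudo-labels
loss = self_training_loss(logits, labels, lam=0.1)
```

In an actual training loop the same combined objective would be minimized by gradient descent on the network parameters; here the point is only that the energy term adds a single extra scalar to the loss, which is what makes it plug-and-play.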
Original language: English
Title of host publication: Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
Pages: 7515-7520
Number of pages: 6
ISBN (Electronic): 9781728188089
DOIs
Publication status: Published - 2020

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
