Mixing Up Real Samples and Adversarial Samples for Semi-Supervised Learning

Yun Ma, Xudong Mao, Yangbin Chen, Qing Li

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review


Consistency regularization methods have shown great success in semi-supervised learning tasks. Most existing methods enforce the consistency constraint over either the local neighborhood or the in-between neighborhood of training samples. In this paper, we propose a novel generalized framework called Adversarial Mixup (AdvMixup), which unifies the local and in-between neighborhood approaches by defining a virtual data distribution along the paths between training samples and their adversarial samples. Experimental results on both synthetic data and benchmark datasets show that AdvMixup achieves better performance and robustness than state-of-the-art methods for semi-supervised learning.
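The core idea, mixing each real sample with an adversarial counterpart along an interpolation path, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: it assumes an FGSM-style perturbation on a toy linear classifier (so the input gradient has a closed form) and a Beta-sampled mixing coefficient as in standard Mixup; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: logits = x @ W, trained with cross-entropy.
W = rng.normal(size=(2, 2))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fgsm(x, y_onehot, eps=0.1):
    """FGSM-style adversarial sample for the linear model.

    For cross-entropy over softmax(x @ W), the input gradient is
    (p - y) @ W.T in closed form, so no autodiff is needed here.
    """
    p = softmax(x @ W)
    grad_x = (p - y_onehot) @ W.T
    return x + eps * np.sign(grad_x)

def advmixup(x, y_onehot, eps=0.1, alpha=1.0):
    """Interpolate between a real sample and its adversarial sample.

    Because both endpoints share the same label, the mixed target is
    simply the original label; only the input is virtual.
    """
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    x_adv = fgsm(x, y_onehot, eps)          # adversarial endpoint
    x_mix = lam * x + (1.0 - lam) * x_adv   # point on the path
    return x_mix, lam

# Example: mix a small batch of 2-D samples with their adversarial versions.
x = rng.normal(size=(4, 2))
y = np.eye(2)[rng.integers(0, 2, size=4)]
x_mix, lam = advmixup(x, y)
```

In a full semi-supervised pipeline, a consistency loss would then encourage the model's prediction at `x_mix` to match the (mixed) predictions at the path's endpoints; that training loop is omitted here.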
Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks (IJCNN)
Place of publication: Glasgow, United Kingdom
Number of pages: 8
Publication status: Published - 19 Jul 2020


