Improving Imbalanced Learning by Pre-finetuning with Data Augmentation

  • Yiwen Shi
  • Taha Valizadeh Aslani
  • Jing Wang
  • Ping Ren
  • Yi Zhang
  • Meng Hu
  • Liang Zhao
  • Hualou Liang

Research output: Journal article publication › Conference article › Academic research › peer-review

26 Citations (Scopus)

Abstract

Imbalanced data is ubiquitous in the real world, where classes are unevenly distributed in the datasets. Such class imbalance poses a major challenge for modern deep learning, even with typical class-balancing approaches such as re-sampling and re-weighting. In this work, we introduced a simple training strategy, namely pre-finetuning, as a new intermediate training stage between the pretrained model and finetuning. During the pre-finetuning stage, we leveraged data augmentation to learn an initial representation that better fits the imbalanced distribution of the domain task. We tested our method on manually contrived imbalanced datasets (both two-class and multi-class) and the FDA drug labeling dataset for ADME (i.e., absorption, distribution, metabolism, and excretion) classification. We found that, compared with standard single-stage training (i.e., vanilla finetuning), our method consistently improves model performance by large margins. Our work demonstrated that pre-finetuning is a simple, yet effective, learning strategy for imbalanced data.
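The pipeline described in the abstract (pretrain → pre-finetune on augmented data → finetune) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `augment_minority` helper and its word-order-shuffling augmentation are hypothetical stand-ins for whatever augmentation the authors used; the key idea shown is building a class-balanced dataset for the intermediate pre-finetuning stage while leaving the original imbalanced data for the final finetuning stage.

```python
import random
from collections import Counter

def augment_minority(texts, labels, seed=0):
    """Build the pre-finetuning dataset: oversample minority classes with
    a crude augmentation (random word-order shuffling) until all classes
    match the majority-class count. Hypothetical helper, for illustration."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())  # balance every class up to the majority
    aug_texts, aug_labels = list(texts), list(labels)
    for cls, n in counts.items():
        pool = [t for t, y in zip(texts, labels) if y == cls]
        for _ in range(target - n):
            tokens = rng.choice(pool).split()
            rng.shuffle(tokens)  # stand-in augmentation: shuffle word order
            aug_texts.append(" ".join(tokens))
            aug_labels.append(cls)
    return aug_texts, aug_labels

# Toy imbalanced corpus: class "A" has twice the examples of "M" and "E".
texts = ["drug absorbed quickly", "metabolized in liver",
         "excreted via kidneys", "poorly absorbed compound"]
labels = ["A", "M", "E", "A"]

# Stage 2 (pre-finetuning) would train on the balanced, augmented set;
# stage 3 (finetuning) would then train on the original (texts, labels).
aug_texts, aug_labels = augment_minority(texts, labels)
print(Counter(aug_labels))  # every class now has 2 examples
```

In a real setup, both stages would update the same pretrained BERT encoder, with pre-finetuning serving only to seed a representation better matched to the imbalanced target distribution.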

Original language: English
Pages (from-to): 68-82
Number of pages: 15
Journal: Proceedings of Machine Learning Research
Volume: 183
Publication status: Published - Sept 2022
Externally published: Yes
Event: 4th International Workshop on Learning with Imbalanced Domains: Theory and Applications, LIDTA 2022 - Grenoble, France
Duration: 23 Sept 2022 → …

Keywords

  • BERT
  • Data Augmentation
  • Finetuning
  • Natural Language Processing

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
