Fast-PADMA: Rapidly Adapting Facial Affect Model from Similar Individuals

Michael Xuelin Huang, Jiajia Li, Grace Ngai, Hong Va Leong, Kien A. Hua

Research output: Journal article › Academic research › peer-reviewed


Abstract

A user-specific model generally performs better in facial affect recognition. Existing solutions, however, suffer from usability issues, since annotation can be long and tedious for end users (e.g., consumers). We address this critical issue by presenting a more user-friendly adaptive model that makes the personalized approach practical. This paper proposes a novel user-adaptive model, which we call Fast-Personal Affect Detection with Minimal Annotation (Fast-PADMA). Fast-PADMA integrates data from multiple source subjects with a small amount of data from the target subject. Collecting this target subject data is feasible, since Fast-PADMA requires only one self-reported affect annotation per facial video segment. To alleviate overfitting in this context of limited individual training data, we propose an efficient bootstrapping technique that strengthens the contribution of multiple similar source subjects. Specifically, we employ an ensemble classifier that combines pretrained weak generic classifiers, each built from the data of one source subject and weighted according to the available data from the target user. The result is a model that does not require expensive computation, such as distribution dissimilarity calculation or model retraining. We evaluate our method with in-depth experiments on five publicly available facial datasets, with results that compare favorably with the state-of-the-art performance on classifying pain, arousal, and valence. Our findings show that Fast-PADMA is effective at rapidly constructing a user-adaptive model that outperforms both its generic and user-specific counterparts. This efficient technique has the potential to significantly improve user-adaptive facial affect recognition for personal use and, therefore, to enable comprehensive affect-aware applications.
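The core idea described in the abstract, a pool of pretrained per-subject weak classifiers reweighted by a handful of labels from the target user, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy nearest-class-mean "weak classifiers", the synthetic subject data, and all function names below are illustrative assumptions, used only to show how weighting by agreement with the target's few annotations avoids any retraining or distribution-distance computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_source_classifier(X, y):
    # Toy per-subject weak classifier: nearest-class-mean rule
    # (a stand-in for whatever pretrained generic classifier is used).
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    def predict(Xq):
        d0 = np.linalg.norm(Xq - mu0, axis=1)
        d1 = np.linalg.norm(Xq - mu1, axis=1)
        return (d1 < d0).astype(int)
    return predict

def make_subject(shift):
    # Synthetic "subject": features drawn around a subject-specific shift,
    # with a binary affect label derived from the feature sum.
    X = rng.normal(shift, 1.0, size=(40, 4))
    y = (X.sum(axis=1) > 4 * shift).astype(int)
    return X, y

# Pool of pretrained classifiers from several source subjects.
sources = [train_source_classifier(*make_subject(s)) for s in (0.0, 0.5, 1.0)]

# Target user contributes only a handful of self-annotated samples.
X_t, y_t = make_subject(0.5)
X_few, y_few = X_t[:5], y_t[:5]        # minimal target annotation
X_test, y_test = X_t[5:], y_t[5:]

# Weight each pretrained classifier by its agreement with the target's
# few labels; similar source subjects get larger weights. No retraining.
weights = np.array([(clf(X_few) == y_few).mean() for clf in sources]) + 1e-9
weights /= weights.sum()

# Weighted-vote prediction for the target user.
votes = np.stack([clf(X_test) for clf in sources])   # shape (n_sources, n_test)
pred = (weights @ votes > 0.5).astype(int)
print("ensemble accuracy on target user:", (pred == y_test).mean())
```

The weighting step is the only place the target user's data enters, which is why such a scheme is cheap: adapting to a new user costs a few forward passes over the pretrained pool rather than a model fit.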
Original language: English
Pages (from-to): 1901-1915
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Volume: 20
Issue number: 7
DOIs
Publication status: Published - 1 Jul 2018

Keywords

  • Affective computing
  • facial affect
  • rapid modeling
  • user-adaptive model

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
