Hard negative generation for identity-disentangled facial expression recognition

Xiaofeng Liu, B.V.K. Vijaya Kumar, Jia You, Ping Jia

Research output: Journal article, Academic research, peer-review

Abstract

Various factors, such as identity-specific attributes, pose, illumination and expression, affect the appearance of face images. Disentangling the identity-specific factors is potentially beneficial for facial expression recognition (FER). Existing image-based FER systems use either hand-crafted or learned features to represent a single face image. In this paper, we propose a novel FER framework, named the identity-disentangled facial expression recognition machine (IDFERM), in which we untangle the identity from a query sample by exploiting its difference from its references (e.g., its mined or generated frontal and neutral normalized faces). We demonstrate a possible ‘recognition via generation’ scheme that consists of a novel hard negative generation (HNG) network and a generalized radial metric learning (RML) network. For FER, the generated normalized faces are used as hard negative samples for metric learning. The difficulties of threshold validation and anchor selection are alleviated in RML, and it requires fewer distance comparisons than traditional deep metric learning methods. The expression representations of RML achieve superior performance on the CK+, MMI and Oulu-CASIA datasets, given a single query image for testing.
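The core idea of using a generated neutral face of the same identity as a hard negative can be sketched with a simple margin-based embedding loss. This is an illustrative NumPy sketch under assumed conventions, not the paper's actual RML objective: the function name, the triplet-style margin formulation, and the Euclidean distance are all assumptions for exposition.

```python
import numpy as np

def hard_negative_margin_loss(query, positive, neutral_negative, margin=0.2):
    """Illustrative margin loss with a generated neutral face as the hard negative.

    query            -- embedding of the query expression image
    positive         -- embedding of another image with the same expression
    neutral_negative -- embedding of the generated neutral face of the SAME
                        identity; sharing identity attributes with the query
                        makes it a hard negative for expression recognition
    margin           -- desired separation between positive and negative distances

    Returns 0 when the hard negative is already pushed `margin` farther
    from the query than the positive; otherwise returns the violation.
    """
    d_pos = np.linalg.norm(query - positive)
    d_neg = np.linalg.norm(query - neutral_negative)
    return max(0.0, d_pos - d_neg + margin)
```

Because the neutral negative shares the query's identity, pose and lighting, minimizing such a loss pressures the embedding to encode expression rather than identity, which is the intuition behind using generated normalized faces as hard negatives.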
Original language: English
Pages (from-to): 1-22
Journal: Pattern Recognition
Volume: 88
DOIs
Publication status: Published - 2019