TY - GEN
T1 - Deep multi-task learning for facial expression recognition and synthesis based on selective feature sharing
AU - Zhao, Rui
AU - Liu, Tianshan
AU - Xiao, Jun
AU - Lun, Daniel P.K.
AU - Lam, Kin Man
N1 - Funding Information:
The work described in this article was supported by the GRF Grant PolyU 15217719 (project code: Q73V) of the Hong Kong Research Grants Council.
Publisher Copyright:
© 2020 IEEE
PY - 2021/1
Y1 - 2021/1
N2 - Multi-task learning is an effective learning strategy for deep-learning-based facial expression recognition tasks. However, most existing methods give limited consideration to feature selection when transferring information between different tasks, which may lead to task interference when training multi-task networks. To address this problem, we propose a novel selective feature-sharing method and establish a multi-task network for facial expression recognition and facial expression synthesis. The proposed method can effectively transfer beneficial features between different tasks, while filtering out useless and harmful information. Moreover, we employ the facial expression synthesis task to enlarge and balance the training dataset, further enhancing the generalization ability of the proposed method. Experimental results show that the proposed method achieves state-of-the-art performance on commonly used facial expression recognition benchmarks, making it a potential solution to real-world facial expression recognition problems.
AB - Multi-task learning is an effective learning strategy for deep-learning-based facial expression recognition tasks. However, most existing methods give limited consideration to feature selection when transferring information between different tasks, which may lead to task interference when training multi-task networks. To address this problem, we propose a novel selective feature-sharing method and establish a multi-task network for facial expression recognition and facial expression synthesis. The proposed method can effectively transfer beneficial features between different tasks, while filtering out useless and harmful information. Moreover, we employ the facial expression synthesis task to enlarge and balance the training dataset, further enhancing the generalization ability of the proposed method. Experimental results show that the proposed method achieves state-of-the-art performance on commonly used facial expression recognition benchmarks, making it a potential solution to real-world facial expression recognition problems.
UR - http://www.scopus.com/inward/record.url?scp=85110481410&partnerID=8YFLogxK
U2 - 10.1109/ICPR48806.2021.9413000
DO - 10.1109/ICPR48806.2021.9413000
M3 - Conference article published in proceeding or book
AN - SCOPUS:85110481410
T3 - Proceedings - International Conference on Pattern Recognition
SP - 4412
EP - 4419
BT - Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Pattern Recognition, ICPR 2020
Y2 - 10 January 2021 through 15 January 2021
ER -