TY - JOUR
T1 - Research on Transfer Learning of Vision-based Gesture Recognition
AU - Wu, Bi Xiao
AU - Yang, Chen Guang
AU - Zhong, Jun Pei
N1 - Funding Information:
This work was supported by National Natural Science Foundation of China (NSFC) (Nos. U20A20200, 61811530281, and 61861136009), Guangdong Regional Joint Foundation (No. 2019B1515120076), Fundamental Research Funds for the Central Universities, and in part by the Foshan Science and Technology Innovation Team Special Project (No. 2018IT100322).
Publisher Copyright:
© 2021, The Author(s).
PY - 2021/3/8
Y1 - 2021/3/8
N2 - Gesture recognition has been widely used for human-robot interaction. At present, a problem in gesture recognition is that knowledge learned in existing domains is rarely exploited to discover and recognize gestures in new domains. For each new domain, a large amount of data must be collected and annotated, and the training of the algorithm does not benefit from prior knowledge, leading to redundant computation and excessive time investment. To address this problem, this paper proposes a method that transfers gesture data across different domains. We use a red-green-blue (RGB) camera to collect images of the gestures, and use Leap Motion to collect the coordinates of 21 joint points of the human hand. Then, we extract a set of novel feature descriptors from the two different data distributions for the study of transfer learning. This paper compares the effects of three classification algorithms, i.e., support vector machine (SVM), broad learning system (BLS), and deep learning (DL). We also compare learning performance with and without the joint distribution adaptation (JDA) algorithm. The experimental results show that the proposed method effectively solves the transfer problem between the RGB camera and Leap Motion. In addition, we found that when using DL to classify the data, excessive training on the source domain may reduce recognition accuracy in the target domain.
KW - gesture recognition
KW - joint distribution adaptation (JDA)
KW - Leap Motion
KW - red-green-blue (RGB) camera
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85102309061&partnerID=8YFLogxK
DO - 10.1007/s11633-020-1273-9
M3 - Journal article
AN - SCOPUS:85102309061
SN - 1476-8186
VL - 18
SP - 422
EP - 431
JO - International Journal of Automation and Computing
JF - International Journal of Automation and Computing
IS - 3
ER -