TY - GEN
T1 - Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment
AU - Shi, Xueying
AU - Jin, Yueming
AU - Dou, Qi
AU - Qin, Jing
AU - Heng, Pheng Ann
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/9
Y1 - 2021/9
N2 - Automated surgical gesture recognition is of great importance in robot-assisted minimally invasive surgery. However, existing methods assume that training and testing data come from the same domain, and thus suffer severe performance degradation when a domain gap exists, such as between a simulator and a real robot. In this paper, we propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from the simulator to the real robot. It remedies the domain gap with enhanced transferable features, exploiting temporal cues in videos and the inherent correlations between the two modalities for gesture recognition. Specifically, we first propose a Motion Direction Oriented Kinematics feature alignment (MDO-K) module, which exploits temporal continuity to align motion directions, whose domain gap is smaller than that of raw position values, relieving the adaptation burden. Moreover, we propose a Kinematic and Visual Relation Attention (KV-Relation-ATT) to transfer the co-occurrence signals of kinematics and vision. Features attended by correlation similarity are more informative and enhance the domain invariance of the model. The two feature alignment strategies benefit the model mutually during end-to-end learning. We extensively evaluate our method for gesture recognition on the DESK dataset with the peg transfer procedure. Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations on the real robot.
AB - Automated surgical gesture recognition is of great importance in robot-assisted minimally invasive surgery. However, existing methods assume that training and testing data come from the same domain, and thus suffer severe performance degradation when a domain gap exists, such as between a simulator and a real robot. In this paper, we propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from the simulator to the real robot. It remedies the domain gap with enhanced transferable features, exploiting temporal cues in videos and the inherent correlations between the two modalities for gesture recognition. Specifically, we first propose a Motion Direction Oriented Kinematics feature alignment (MDO-K) module, which exploits temporal continuity to align motion directions, whose domain gap is smaller than that of raw position values, relieving the adaptation burden. Moreover, we propose a Kinematic and Visual Relation Attention (KV-Relation-ATT) to transfer the co-occurrence signals of kinematics and vision. Features attended by correlation similarity are more informative and enhance the domain invariance of the model. The two feature alignment strategies benefit the model mutually during end-to-end learning. We extensively evaluate our method for gesture recognition on the DESK dataset with the peg transfer procedure. Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations on the real robot.
UR - http://www.scopus.com/inward/record.url?scp=85124351831&partnerID=8YFLogxK
U2 - 10.1109/IROS51168.2021.9636578
DO - 10.1109/IROS51168.2021.9636578
M3 - Conference article published in proceedings or book
AN - SCOPUS:85124351831
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 9453
EP - 9460
BT - IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
Y2 - 27 September 2021 through 1 October 2021
ER -