TY - GEN
T1 - A Locomotion Recognition System Using Depth Images
AU - Yan, Tingfang
AU - Sun, Yuxiang
AU - Liu, Tingting
AU - Cheung, Chi Hong
AU - Meng, Max Qing Hu
N1 - Funding Information:
This work is partially supported by RGC GRF grant #14205914, ITC ITF grant ITS/236/15, and Shenzhen Science and Technology Innovation project c.02.17.00601 awarded to Prof. Max Q.-H. Meng.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/9/10
Y1 - 2018/9/10
N2 - Powered lower-limb orthoses and prostheses are attracting increasing attention for assisting activities of daily living. To collaborate safely and naturally with human users, the key technology is an intelligent controller that accurately decodes the user's movement intention. In this work, we propose an innovative locomotion recognition system based on depth images. Composed of a feature extraction subsystem and a finite-state-machine-based recognition subsystem, the proposed approach captures both the limb movements and the terrain directly in front of the user. This makes it possible to detect locomotion modes in advance, especially at transition states, enabling the associated wearable robot to deliver smooth and seamless assistance. Validation experiments were conducted with nine subjects, each tracing a track comprising standing, walking, stair ascending, and stair descending for three rounds. The results showed that in steady state, the proposed system recognized all four locomotion tasks with approximately 100% accuracy. Out of 216 mode transitions, 82.4% of the intended locomotion tasks were detected before the transition occurred. Thanks to its high accuracy and promising prediction performance, the proposed locomotion recognition system is expected to significantly improve both the safety and the effectiveness of lower-limb assistive devices.
AB - Powered lower-limb orthoses and prostheses are attracting increasing attention for assisting activities of daily living. To collaborate safely and naturally with human users, the key technology is an intelligent controller that accurately decodes the user's movement intention. In this work, we propose an innovative locomotion recognition system based on depth images. Composed of a feature extraction subsystem and a finite-state-machine-based recognition subsystem, the proposed approach captures both the limb movements and the terrain directly in front of the user. This makes it possible to detect locomotion modes in advance, especially at transition states, enabling the associated wearable robot to deliver smooth and seamless assistance. Validation experiments were conducted with nine subjects, each tracing a track comprising standing, walking, stair ascending, and stair descending for three rounds. The results showed that in steady state, the proposed system recognized all four locomotion tasks with approximately 100% accuracy. Out of 216 mode transitions, 82.4% of the intended locomotion tasks were detected before the transition occurred. Thanks to its high accuracy and promising prediction performance, the proposed locomotion recognition system is expected to significantly improve both the safety and the effectiveness of lower-limb assistive devices.
UR - http://www.scopus.com/inward/record.url?scp=85063127086&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2018.8460514
DO - 10.1109/ICRA.2018.8460514
M3 - Conference article published in proceeding or book
AN - SCOPUS:85063127086
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 6766
EP - 6772
BT - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
PB - IEEE
T2 - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Y2 - 21 May 2018 through 25 May 2018
ER -