TY - GEN
T1 - Construct 3D Hand Skeleton with Commercial WiFi
AU - Ji, Sijie
AU - Zhang, Xuanye
AU - Zheng, Yuanqing
AU - Li, Mo
N1 - Publisher Copyright:
© 2023 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2023/11/12
Y1 - 2023/11/12
N2 - This paper presents HandFi, which constructs hand skeletons with practical WiFi devices. Unlike previous WiFi hand sensing systems that primarily employ predefined gestures for pattern matching, HandFi constructs the hand skeleton and can thereby enable a variety of downstream WiFi-based hand sensing applications in gaming, healthcare, and smart homes. Deriving the skeleton from WiFi signals is challenging, especially because the palm is a dominant reflector compared with the fingers. HandFi develops a novel multi-task learning neural network with a series of customized loss functions to capture low-level hand information from WiFi signals. During offline training, HandFi takes raw WiFi signals as input and uses Leap Motion to provide supervision. During online use, with only commercial WiFi, HandFi is capable of producing 2D hand masks as well as 3D hand poses. We demonstrate that HandFi can serve as a foundation model enabling developers to build various applications such as finger tracking and sign language recognition, and that it outperforms existing WiFi-based solutions. Artifacts can be found at: https://github.com/SIJIEJI/HandFi
AB - This paper presents HandFi, which constructs hand skeletons with practical WiFi devices. Unlike previous WiFi hand sensing systems that primarily employ predefined gestures for pattern matching, HandFi constructs the hand skeleton and can thereby enable a variety of downstream WiFi-based hand sensing applications in gaming, healthcare, and smart homes. Deriving the skeleton from WiFi signals is challenging, especially because the palm is a dominant reflector compared with the fingers. HandFi develops a novel multi-task learning neural network with a series of customized loss functions to capture low-level hand information from WiFi signals. During offline training, HandFi takes raw WiFi signals as input and uses Leap Motion to provide supervision. During online use, with only commercial WiFi, HandFi is capable of producing 2D hand masks as well as 3D hand poses. We demonstrate that HandFi can serve as a foundation model enabling developers to build various applications such as finger tracking and sign language recognition, and that it outperforms existing WiFi-based solutions. Artifacts can be found at: https://github.com/SIJIEJI/HandFi
KW - 3D hand pose
KW - gesture recognition
KW - multi-task learning
KW - wireless sensing
UR - http://www.scopus.com/inward/record.url?scp=85185695282&partnerID=8YFLogxK
U2 - 10.1145/3625687.3625812
DO - 10.1145/3625687.3625812
M3 - Conference article published in proceeding or book
AN - SCOPUS:85185695282
T3 - SenSys 2023 - Proceedings of the 21st ACM Conference on Embedded Networked Sensor Systems
SP - 322
EP - 334
BT - SenSys 2023 - Proceedings of the 21st ACM Conference on Embedded Networked Sensor Systems
PB - Association for Computing Machinery, Inc
T2 - 21st ACM Conference on Embedded Networked Sensor Systems, SenSys 2023
Y2 - 13 November 2023 through 15 November 2023
ER -