TY - GEN
T1 - Deep Learning of CSI for Efficient Device-free Human Activity Recognition
AU - Khan, Danista
AU - Ho, Ivan Wang Hei
N1 - Funding Information:
This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (2020), Project No. 76.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/14
Y1 - 2021/6/14
N2 - Over the years, wireless sensing has gained popularity in indoor localization and human activity recognition (HAR) applications. As wireless signals are sensitive to human motion, they reflect and scatter in different directions depending on the activities performed by people. The channel state information (CSI) stores the combined effect of these changes in the environment, and the stored patterns can be utilized to recognize different human activities such as walking, standing, and sitting. Prior studies on activity recognition mostly differentiate human activities by classifying one complete time series as a single activity. However, these approaches require massive datasets to give accurate results in real-time scenarios, and the classification is in fact based on short-term activity samples rather than the complete activity series. In this paper, highly accurate sample-level activity recognition is achieved by exploiting a special type of convolutional neural network (CNN), U-Net. The proposed setup does not require manual feature extraction and can efficiently classify short-term activity samples. Our experimental results indicate that the proposed architecture can classify different levels of human activities with an accuracy of 98.57%, outperforming a conventional deep neural network by 14.67% on the same dataset.
AB - Over the years, wireless sensing has gained popularity in indoor localization and human activity recognition (HAR) applications. As wireless signals are sensitive to human motion, they reflect and scatter in different directions depending on the activities performed by people. The channel state information (CSI) stores the combined effect of these changes in the environment, and the stored patterns can be utilized to recognize different human activities such as walking, standing, and sitting. Prior studies on activity recognition mostly differentiate human activities by classifying one complete time series as a single activity. However, these approaches require massive datasets to give accurate results in real-time scenarios, and the classification is in fact based on short-term activity samples rather than the complete activity series. In this paper, highly accurate sample-level activity recognition is achieved by exploiting a special type of convolutional neural network (CNN), U-Net. The proposed setup does not require manual feature extraction and can efficiently classify short-term activity samples. Our experimental results indicate that the proposed architecture can classify different levels of human activities with an accuracy of 98.57%, outperforming a conventional deep neural network by 14.67% on the same dataset.
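N1 - Editor note: The abstract describes a U-Net-style CNN that labels individual CSI samples rather than whole activity series. The following is a minimal illustrative sketch of such a 1-D encoder-decoder, not the authors' exact architecture; the subcarrier count (30), window length (256), number of classes (4), and all layer sizes are assumptions chosen only to make the sketch self-contained and runnable.

# Minimal sketch of a 1-D U-Net-style network for per-sample CSI activity
# labeling. All dimensions below are illustrative assumptions, not values
# taken from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SUBCARRIERS = 30   # assumed CSI amplitude channels per time sample
WINDOW = 256       # assumed number of CSI samples per input window
CLASSES = 4        # assumed activity classes (e.g., walk, stand, sit, empty)

def conv_block(x, filters):
    """Two 1-D convolutions, each followed by batch norm and ReLU."""
    for _ in range(2):
        x = layers.Conv1D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_unet_1d():
    inputs = layers.Input(shape=(WINDOW, SUBCARRIERS))

    # Encoder: extract features while halving the temporal resolution.
    e1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling1D(2)(e1)
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling1D(2)(e2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and fuse encoder features via skip connections.
    u2 = layers.UpSampling1D(2)(b)
    d2 = conv_block(layers.Concatenate()([u2, e2]), 64)
    u1 = layers.UpSampling1D(2)(d2)
    d1 = conv_block(layers.Concatenate()([u1, e1]), 32)

    # Per-sample softmax: one activity label for every CSI time sample.
    outputs = layers.Conv1D(CLASSES, 1, activation="softmax")(d1)
    return models.Model(inputs, outputs)

model = build_unet_1d()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data with the assumed tensor layout only:
# X: (batch, WINDOW, SUBCARRIERS) CSI amplitudes, y: (batch, WINDOW) labels.
X = np.random.rand(8, WINDOW, SUBCARRIERS).astype("float32")
y = np.random.randint(0, CLASSES, size=(8, WINDOW))
model.fit(X, y, epochs=1, batch_size=4, verbose=0)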
KW - Channel State Information (CSI)
KW - Deep Neural Networks (DNN)
KW - Human Activity Recognition (HAR)
UR - http://www.scopus.com/inward/record.url?scp=85119891654&partnerID=8YFLogxK
U2 - 10.1109/WF-IoT51360.2021.9595661
DO - 10.1109/WF-IoT51360.2021.9595661
M3 - Conference article published in proceedings or book
AN - SCOPUS:85119891654
T3 - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
SP - 19
EP - 24
BT - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
Y2 - 14 June 2021 through 31 July 2021
ER -