Multimodal data-based deep learning model for sitting posture recognition toward office workers’ health promotion

Xiangying Zhang, Junming Fan, Tao Peng (Corresponding Author), Pai Zheng (Corresponding Author), Xujun Zhang, Renzhong Tang

Research output: Journal article publication › Journal article › Academic research › peer-review

13 Citations (Scopus)

Abstract

Recognizing sitting posture is important for preventing work-related musculoskeletal disorders among office workers. Multimodal data, i.e., infrared maps and pressure maps, have been leveraged to achieve accurate recognition while preserving privacy and remaining unobtrusive in daily use. Existing studies in sitting posture recognition rely on handcrafted features with machine learning models for multimodal data fusion, which depends heavily on domain knowledge. Therefore, a deep learning model is proposed to fuse the multimodal data and recognize the sitting posture. This model contains modality-specific backbones, a cross-modal self-attention module, and multi-task learning-based classification. Experiments conducted on 20 participants’ data verify the effectiveness of the proposed model, which achieves a 93.08% F1-score. This high performance indicates that the proposed model is promising for sitting posture-related applications.
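The abstract describes the architecture only at a high level. The PyTorch sketch below illustrates how such a pipeline could be wired together: one backbone per modality (pressure map and infrared map), a cross-modal self-attention fusion step, and multi-task classification heads. All concrete choices (backbone depth, feature dimension, number of attention heads, number of posture classes, and the auxiliary task) are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of a multimodal sitting-posture model, assuming small CNN
# backbones, multi-head self-attention over modality tokens, and two task
# heads. Layer sizes and class counts are assumptions for illustration only.
import torch
import torch.nn as nn


class ModalityBackbone(nn.Module):
    """Small CNN backbone applied to one modality (pressure or infrared map)."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class CrossModalFusion(nn.Module):
    """Self-attention over the stacked per-modality feature tokens."""
    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(tokens, tokens, tokens)  # (B, num_modalities, D)
        return fused.mean(dim=1)                      # pool over modalities -> (B, D)


class SittingPostureNet(nn.Module):
    """Two backbones + attention fusion + multi-task heads. The main head
    predicts the posture class; the auxiliary head is an assumed secondary
    task used only to illustrate multi-task classification."""
    def __init__(self, num_postures: int = 7, num_aux: int = 4, feat_dim: int = 128):
        super().__init__()
        self.pressure_backbone = ModalityBackbone(1, feat_dim)
        self.infrared_backbone = ModalityBackbone(1, feat_dim)
        self.fusion = CrossModalFusion(feat_dim)
        self.posture_head = nn.Linear(feat_dim, num_postures)
        self.aux_head = nn.Linear(feat_dim, num_aux)

    def forward(self, pressure: torch.Tensor, infrared: torch.Tensor):
        tokens = torch.stack(
            [self.pressure_backbone(pressure), self.infrared_backbone(infrared)], dim=1
        )
        fused = self.fusion(tokens)
        return self.posture_head(fused), self.aux_head(fused)


# Example forward pass with dummy 32x32 maps (sensor resolutions are assumptions).
model = SittingPostureNet()
posture_logits, aux_logits = model(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32))
```

In such a design, the per-modality backbones extract features independently, while the self-attention step lets each modality token attend to the other before classification; training would typically combine the losses of the two heads with a weighted sum.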

Original language: English
Article number: 114150
Number of pages: 12
Journal: Sensors and Actuators A: Physical
Volume: 350
DOIs
Publication status: Published - 1 Feb 2023

Keywords

  • Deep learning
  • Ergonomics
  • Multimodal data
  • Self-attention
  • Sitting posture recognition

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Instrumentation
  • Condensed Matter Physics
  • Surfaces, Coatings and Films
  • Metals and Alloys
  • Electrical and Electronic Engineering
