A new approach for pain event detection in video

Junkai Chen, Zheru Chi, Hong Fu

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

This paper presents a new approach for pain event detection in video. Unlike previous works that focused on frame-based detection, we target detecting pain events at the video level. We explore the spatial information of video frames and the dynamic textures of video sequences, and propose two different types of features: HOG of fiducial points (P-HOG) is employed to extract spatial features from video frames, and HOG from Three Orthogonal Planes (HOG-TOP) is used to represent the dynamic textures of video subsequences. We then apply max pooling to represent a video sequence as a global feature vector. Multiple Kernel Learning (MKL) is used to find an optimal fusion of the two types of features, and an SVM with multiple kernels is trained to perform the final classification. Experiments on the UNBC-McMaster Shoulder Pain dataset achieve promising results, demonstrating the effectiveness of our approach.
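The pipeline above can be sketched in a few lines: per-video features are max-pooled into global vectors, and the two resulting Gram matrices are fused as a weighted sum before training a kernel classifier. This is a minimal NumPy illustration only; the feature dimensions, the toy data, and the fixed `beta` weights are hypothetical (in the paper, the weights are learned by MKL, and the classifier is an SVM).

```python
import numpy as np

def max_pool(frame_feats):
    """Max pooling across frames/subsequences: one global vector per video."""
    return frame_feats.max(axis=0)

def linear_kernel(A, B):
    """Plain linear Gram matrix between row-wise feature sets."""
    return A @ B.T

# Hypothetical toy data: 4 videos with varying numbers of frames,
# a 32-dim P-HOG per frame and a 48-dim HOG-TOP per subsequence.
rng = np.random.default_rng(0)
phog = [max_pool(rng.random((n, 32))) for n in (10, 12, 8, 15)]
hogtop = [max_pool(rng.random((n, 48))) for n in (3, 4, 2, 5)]
X1, X2 = np.stack(phog), np.stack(hogtop)

# Multiple-kernel fusion: a weighted sum of the two base kernels.
# In the paper these weights are learned by MKL; fixed here for illustration.
beta = (0.6, 0.4)
K = beta[0] * linear_kernel(X1, X1) + beta[1] * linear_kernel(X2, X2)
print(K.shape)  # (4, 4)
```

The fused matrix `K` is still a valid (positive semi-definite) kernel, so it can be passed directly to any SVM implementation that accepts precomputed kernels.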
Original language: English
Title of host publication: 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015
Publisher: IEEE
Pages: 250-254
Number of pages: 5
ISBN (Electronic): 9781479999538
DOIs
Publication status: Published - 2 Dec 2015
Event: 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015 - Xi'an, China
Duration: 21 Sep 2015 - 24 Sep 2015

Conference

Conference: 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015
Country: China
City: Xi'an
Period: 21/09/15 - 24/09/15

Keywords

  • HOG-TOP
  • P-HOG
  • Pain event detection

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software
