Video action recognition with spatio-temporal graph embedding and spline modeling

Yin Yuan, Haomian Zheng, Zhu Li, Dapeng Zhang

Research output: Chapter in book / Conference proceeding, Conference article published in proceeding or book, Academic research, peer-reviewed

9 Citations (Scopus)

Abstract

In recent years, video analysis and event recognition have become popular research topics with wide applications in surveillance and security. In this paper, we propose a video action appearance model based on spatio-temporal graph embedding, and a video action recognition scheme based on video luminance field trajectory spline modeling and aligned matching. Graphs are computed from spline re-sampling of the training video data set. Matching is achieved by minimizing the average projection distance between query clips and training groups. Simulations with the Cambridge hand gesture data set demonstrate the effectiveness of the proposed solution.
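The matching step described in the abstract, minimizing the average projection distance between a query clip and each training group, can be read as a nearest-subspace classification. The sketch below illustrates that idea only; the `subspace_basis` construction here is a generic PCA stand-in (via SVD), not the paper's actual spatio-temporal graph embedding, and all function names are illustrative assumptions:

```python
import numpy as np

def subspace_basis(samples, k=2):
    # samples: (n_samples, dim) array of embedded frames for one training group.
    # Returns an orthonormal basis (dim, k) for the top-k principal directions.
    # NOTE: PCA stand-in for the paper's graph-embedding subspace (assumption).
    centered = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def projection_distance(x, basis, mean):
    # Distance from point x to the affine subspace (span of basis, through mean):
    # norm of the residual after projecting (x - mean) onto the basis.
    d = x - mean
    proj = basis @ (basis.T @ d)
    return np.linalg.norm(d - proj)

def classify(query_frames, class_models):
    # class_models: {label: (basis, mean)} per training group.
    # Score each group by the average projection distance over the query's
    # frames, then pick the group with the smallest average distance.
    scores = {
        label: np.mean([projection_distance(f, basis, mean)
                        for f in query_frames])
        for label, (basis, mean) in class_models.items()
    }
    return min(scores, key=scores.get)
```

A query clip whose embedded frames lie close to one group's subspace will yield a small average residual for that group and is assigned its label.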
Original language: English
Title of host publication: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Proceedings
Pages: 2422-2425
Number of pages: 4
DOIs
Publication status: Published - 8 Nov 2010
Event: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Dallas, TX, United States
Duration: 14 Mar 2010 - 19 Mar 2010

Conference

Conference: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010
Country/Territory: United States
City: Dallas, TX
Period: 14/03/10 - 19/03/10

Keywords

  • Appearance modeling
  • Graph embedding
  • Spline modeling
  • Video event analysis

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
