Visual-semantic graph neural network with pose-position attentive learning for group activity recognition

Tianshan Liu, Rui Zhao, Kin Man Lam, Jun Kong

Research output: Journal article, Academic research, peer-reviewed

15 Citations (Scopus)


Video-based group activities typically involve interactive contexts among diverse visual modalities between multiple persons, as well as semantic relationships between individual actions. Nevertheless, the majority of existing methods for group activity recognition either capture the relationships among different persons using only the RGB modality, or neglect to exploit the label hierarchies between individual actions and the group activity. To tackle these issues, we propose a visual-semantic graph neural network with pose-position attentive learning (VSGNN-PAL) for group activity recognition. Specifically, we first extract individual-level appearance and motion representations from RGB and optical-flow inputs to build a bi-modal visual graph. Two attentive aggregators are further proposed to integrate pose and position information to measure the relevance scores between persons, and to dynamically refine the representation of each visual node from both modality-specific and cross-modal perspectives. To model the semantic hierarchy of the label space, we construct a semantic graph based on the linguistic embeddings of individual-action and group-activity labels. We further employ a bi-directional mapping learning scheme to integrate label-relation-aware semantic context into the visual representations. In addition, a global reasoning module is introduced to progressively generate group-level representations while maintaining the scene description. Furthermore, we formulate a semantic-preserving loss to maintain consistency between the learned high-level representations and the semantics of the ground-truth labels. Experimental results on three group activity benchmarks demonstrate that the proposed method achieves state-of-the-art performance.
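As a rough illustration of the pose-position attentive aggregation described in the abstract, the sketch below computes pairwise relevance scores between persons from concatenated pose and position cues, then refines each person's visual feature as an attention-weighted sum over all persons. All names and the exact formulation here are hypothetical simplifications, not the paper's actual method:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for row-wise attention normalization.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_aggregate(visual, pose, position):
    """Hypothetical sketch of a pose-position attentive aggregator.

    visual:   (N, D) per-person visual features (e.g., one modality stream)
    pose:     (N, P) per-person pose embeddings
    position: (N, 2) normalized person-centre coordinates
    Returns refined (N, D) visual features.
    """
    # Relevance scores from pose-position cues between every pair of persons.
    cues = np.concatenate([pose, position], axis=1)      # (N, P+2)
    scores = cues @ cues.T / np.sqrt(cues.shape[1])      # (N, N)
    attn = softmax(scores, axis=1)                       # rows sum to 1
    # Each visual node is refined by attending over all persons' features.
    return attn @ visual

# Toy usage: 4 persons, 8-D visual features, 6-D pose embeddings.
rng = np.random.default_rng(0)
refined = attentive_aggregate(rng.normal(size=(4, 8)),
                              rng.normal(size=(4, 6)),
                              rng.uniform(size=(4, 2)))
print(refined.shape)  # (4, 8)
```

In the full model, one such aggregator per modality (appearance and motion) plus a cross-modal variant would refine the bi-modal visual graph before group-level reasoning.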

Original language: English
Pages (from-to): 217-231
Number of pages: 15
Publication status: Published - 28 Jun 2022

Keywords


  • Graph neural network
  • Group activity recognition
  • Pose-position attentive learning
  • Visual-semantic context

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

