Transductive Zero-Shot Action Recognition via Visually Connected Graph Convolutional Networks

Yangyang Xu, Chu Han, Jing Qin, Xuemiao Xu, Guoqiang Han, Shengfeng He

Research output: Journal article › Academic research › peer-review

11 Citations (Scopus)


With the explosive growth of action categories, zero-shot action recognition aims to extend a well-trained model to novel/unseen classes. To bridge the large knowledge gap between seen and unseen classes, in this brief, we visually associate unseen actions with seen categories in a visually connected graph, and knowledge is then transferred from the visual feature space to the semantic space via grouped attention graph convolutional networks (GAGCNs). In particular, we extract visual features for all actions and build a visually connected graph that attaches seen actions to visually similar unseen categories. Moreover, the proposed grouped attention mechanism exploits the hierarchical knowledge in the graph, enabling the GAGCN to propagate visual-semantic connections from seen actions to unseen ones. We extensively evaluate the proposed method on three data sets: HMDB51, UCF101, and NTU RGB+D. Experimental results show that the GAGCN outperforms state-of-the-art methods.
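The abstract's pipeline, a visual-similarity graph over seen and unseen actions followed by graph-convolutional propagation, can be sketched as follows. This is a minimal illustration, not the authors' GAGCN: the cosine k-nearest-neighbour graph construction, the function names, and the plain symmetrically normalised GCN layer (without the grouped attention mechanism) are all assumptions for the sake of the example.

```python
import numpy as np

def build_visual_graph(feats, k=3):
    """Hypothetical sketch of a 'visually connected graph': connect each
    action to its k most visually similar actions by cosine similarity.
    (Similarity measure and k are assumptions, not from the paper.)"""
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T
    n = feats.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(-sim[i])[1:k + 1]  # skip self (index 0)
        adj[i, nbrs] = 1.0
    adj = np.maximum(adj, adj.T)  # symmetrise the kNN graph
    adj += np.eye(n)              # add self-loops
    return adj

def gcn_layer(adj, x, w):
    """One generic graph-convolution layer: symmetrically normalised
    propagation D^{-1/2} A D^{-1/2} X W followed by ReLU."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    a_hat = d_inv_sqrt @ adj @ d_inv_sqrt
    return np.maximum(a_hat @ x @ w, 0.0)

# Toy usage: 6 actions (seen + unseen mixed), 4-d visual features,
# projected toward a 3-d "semantic" space by one propagation step.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
adj = build_visual_graph(feats, k=2)
out = gcn_layer(adj, feats, rng.normal(size=(4, 3)))
```

Because the adjacency mixes seen and unseen nodes, each propagation step lets unseen actions aggregate features from their visually similar seen neighbours, which is the transfer mechanism the abstract describes.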

Original language: English
Article number: 9173643
Pages (from-to): 3761-3769
Number of pages: 9
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 8
Publication status: Published - Aug 2021


Keywords

  • Action recognition
  • graph convolutional network (GCN)
  • zero-shot learning (ZSL)

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


