TY - JOUR
T1 - TCGL: Temporal Contrastive Graph for Self-Supervised Video Representation Learning
T2 - IEEE Transactions on Image Processing
AU - Liu, Yang
AU - Wang, Keze
AU - Liu, Lingbo
AU - Lan, Haoyuan
AU - Lin, Liang
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 62002395; in part by the National Natural Science Foundation of Guangdong Province, China, under Grant 2021A15150123; in part by the China Postdoctoral Science Foundation-funded project under Grant 2020M672966; and in part by the National Key Research and Development Program of China under Grant 2020AAA0109704.
Publisher Copyright:
© 2022 IEEE.
PY - 2022/2
Y1 - 2022/2
AB - Video self-supervised learning is a challenging task that requires significant expressive power from the model to leverage rich spatial-temporal knowledge and to generate effective supervisory signals from large amounts of unlabeled videos. However, existing methods fail to increase the temporal diversity of unlabeled videos and neglect to explicitly model multi-scale temporal dependencies. To overcome these limitations, we take advantage of the multi-scale temporal dependencies within videos and propose a novel video self-supervised learning framework named Temporal Contrastive Graph Learning (TCGL), which jointly models inter-snippet and intra-snippet temporal dependencies for temporal representation learning with a hybrid graph contrastive learning strategy. Specifically, a Spatial-Temporal Knowledge Discovering (STKD) module is first introduced to extract motion-enhanced spatial-temporal representations from videos based on frequency-domain analysis with the discrete cosine transform. To explicitly model the multi-scale temporal dependencies of unlabeled videos, our TCGL integrates prior knowledge about frame and snippet orders into graph structures, i.e., the intra-/inter-snippet Temporal Contrastive Graphs (TCG). Then, specific contrastive learning modules are designed to maximize the agreement between nodes in different graph views. To generate supervisory signals for unlabeled videos, we introduce an Adaptive Snippet Order Prediction (ASOP) module, which leverages the relational knowledge among video snippets to learn the global context representation and to recalibrate channel-wise features adaptively. Experimental results demonstrate the superiority of our TCGL over state-of-the-art methods on large-scale action recognition and video retrieval benchmarks. The code is publicly available at https://github.com/YangLiu9208/TCGL.
KW - graph neural networks
KW - self-supervised learning
KW - spatial-temporal data analysis
KW - video understanding
UR - http://www.scopus.com/inward/record.url?scp=85124816207&partnerID=8YFLogxK
U2 - 10.1109/TIP.2022.3147032
DO - 10.1109/TIP.2022.3147032
M3 - Journal article
C2 - 35157584
AN - SCOPUS:85124816207
SN - 1057-7149
VL - 31
SP - 1978
EP - 1993
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -