Network representation learning (NRL) aims to map the vertices of a network into a low-dimensional space that preserves the network structure and its inherent properties. Most existing NRL methods adopt shallow models whose limited capacity to capture highly non-linear network structure results in sub-optimal representations. It is therefore nontrivial to capture highly non-linear network structure while preserving both global and local structure in NRL. To address this problem, in this paper we propose a new graph convolutional autoencoder architecture based on a depth-based representation of graph structure, referred to as the depth-based subgraph convolutional autoencoder (DS-CAE), which integrates both the global topological and local connectivity structures within a graph. Our idea is to first decompose a graph into a family of K-layer expansion subgraphs rooted at each vertex, so as to better capture long-range vertex inter-dependencies. A set of convolution filters then slides over the subgraphs of each vertex to extract local structural connectivity information, analogous to the standard convolution operation on grid data. In contrast to most existing models for unsupervised learning on graph-structured data, our model captures highly non-linear structure by simultaneously integrating node features and network structure into the learned representation, which significantly improves predictive performance on a number of benchmark datasets.
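The two steps the abstract describes — extracting a K-layer expansion subgraph rooted at each vertex, then applying a shared convolution filter over that subgraph — can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual architecture: the function names, the mean-pooling aggregation, and the tanh nonlinearity are assumptions made for illustration.

```python
from collections import deque
import numpy as np

def k_layer_expansion_subgraph(adj, root, K):
    """BFS from `root`, collecting every vertex whose shortest-path
    distance from the root is at most K — a K-layer expansion
    subgraph rooted at that vertex (illustrative approximation)."""
    dist = {root: 0}
    order = [root]
    q = deque([root])
    while q:
        u = q.popleft()
        if dist[u] == K:          # do not expand beyond layer K
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                order.append(v)
                q.append(v)
    return order                   # vertices ordered by expansion layer

def subgraph_convolution(features, subgraph, weights):
    """Aggregate the feature vectors of the subgraph's vertices and
    apply one shared filter (weight matrix) — a stand-in for sliding
    a convolution filter over the subgraph, as on grid data."""
    pooled = np.mean([features[v] for v in subgraph], axis=0)
    return np.tanh(weights @ pooled)

# Toy graph: a 5-vertex path 0-1-2-3-4, with one-hot node features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
features = np.eye(5)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))    # hypothetical filter: 5 -> 3 dims

sub = k_layer_expansion_subgraph(adj, root=2, K=1)
emb = subgraph_convolution(features, sub, W)
print(sub)        # vertices within 1 hop of vertex 2: [2, 1, 3]
print(emb.shape)  # (3,)
```

Repeating this for every root vertex yields one local embedding per vertex; in the paper these embeddings feed an autoencoder, which this sketch omits.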
- Graph convolutional neural network
- Network representation learning
- Node classification