Towards Deeper Graph Neural Networks with Differentiable Group Normalization

Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, Xia Hu

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review


Graph neural networks (GNNs), which learn the representation of a node by aggregating its neighbors, have become an effective computational tool in downstream applications. Over-smoothing is one of the key issues that limit the performance of GNNs as the number of layers increases: stacked aggregators make node representations converge to indistinguishable vectors. Several attempts have been made to tackle the issue by pulling linked node pairs close and pushing unlinked pairs apart. However, they often ignore the intrinsic community structure and thus yield sub-optimal performance. The representations of nodes within the same community/class need to be similar to facilitate classification, while different classes are expected to be well separated in the embedding space. To bridge the gap, we introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN). It normalizes nodes within the same group independently to increase their smoothness, and separates node distributions among different groups, significantly alleviating the over-smoothing issue. Experiments on real-world datasets demonstrate that DGN makes GNN models more robust to over-smoothing and achieves better performance with deeper GNNs.
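The group-wise normalization idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the group count, the assignment weights `U`, and the balancing factor `lam` are illustrative assumptions, and learnable affine (scale/shift) parameters are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differentiable_group_norm(H, U, lam=0.01, eps=1e-5):
    """Sketch of DGN: softly assign nodes to groups, normalize each
    group's embeddings independently, and add the result back.

    H   : (n, d) node embeddings from a GNN layer
    U   : (d, G) assignment weights (G = number of groups; learnable
          in a real model, random here for illustration)
    lam : weight balancing the input and the normalized group terms
    """
    S = softmax(H @ U, axis=1)                     # (n, G) soft group assignment
    out = H.copy()
    for g in range(S.shape[1]):
        Hg = S[:, [g]] * H                         # group-weighted embeddings
        mu = Hg.mean(axis=0, keepdims=True)
        std = Hg.std(axis=0, keepdims=True)
        out = out + lam * (Hg - mu) / (std + eps)  # per-group normalization
    return out

# Toy usage: 4 nodes, 3-dim embeddings, 2 groups.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
U = rng.standard_normal((3, 2))
Z = differentiable_group_norm(H, U)
print(Z.shape)  # (4, 3)
```

Because the soft assignment `S` is produced by a differentiable softmax, the whole operation can be trained end-to-end with the rest of the GNN, which is what makes the grouping "differentiable".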
Original language: English
Title of host publication: Thirty-fourth Annual Conference on Neural Information Processing Systems
Publication status: Published - 2020
Event: Thirty-fourth Annual Conference on Neural Information Processing Systems
Duration: 6 Dec 2020 - 12 Dec 2020

Publication series

Name: Annual Conference on Neural Information Processing Systems


Conference: Thirty-fourth Annual Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
