TY - GEN
T1 - LD²: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings
AU - Liao, Ningyi
AU - Li, Xiang
AU - Luo, Siqiang
AU - Shi, Jieming
N1 - Publisher Copyright:
© 2023 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2023/9
Y1 - 2023/9
N2 - Heterophilous Graph Neural Network (GNN) is a family of GNNs that specializes in learning graphs under heterophily, where connected nodes tend to have different labels. Most existing heterophilous models incorporate iterative non-local computations to capture node relationships. However, these approaches have limited application to large-scale graphs due to their high computational costs and challenges in adopting minibatch schemes. In this work, we study the scalability issues of heterophilous GNN and propose a scalable model, LD2, which simplifies the learning process by decoupling graph propagation and generating expressive embeddings prior to training. Theoretical analysis demonstrates that LD2 achieves optimal time complexity in training, as well as a memory footprint that remains independent of the graph scale. We conduct extensive experiments to showcase that our model is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. Our code is available at: https://github.com/gdmnl/LD2.
AB - Heterophilous Graph Neural Network (GNN) is a family of GNNs that specializes in learning graphs under heterophily, where connected nodes tend to have different labels. Most existing heterophilous models incorporate iterative non-local computations to capture node relationships. However, these approaches have limited application to large-scale graphs due to their high computational costs and challenges in adopting minibatch schemes. In this work, we study the scalability issues of heterophilous GNN and propose a scalable model, LD2, which simplifies the learning process by decoupling graph propagation and generating expressive embeddings prior to training. Theoretical analysis demonstrates that LD2 achieves optimal time complexity in training, as well as a memory footprint that remains independent of the graph scale. We conduct extensive experiments to showcase that our model is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. Our code is available at: https://github.com/gdmnl/LD2.
UR - http://www.scopus.com/inward/record.url?scp=85180680851&partnerID=8YFLogxK
M3 - Conference article published in proceeding or book
AN - SCOPUS:85180680851
VL - 36
T3 - Advances in Neural Information Processing Systems
SP - 1
EP - 13
BT - Advances in Neural Information Processing Systems (NeurIPS 2023)
T2 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Y2 - 10 December 2023 through 16 December 2023
ER -