TY - GEN
T1 - Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion
AU - Li, Yanchen
AU - Li, Jiachun
AU - Sun, Kebin
AU - Leng, Luziwei
AU - Cheng, Ran
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024/9
Y1 - 2024/9
AB - Drawing on the intricate structures of the brain, Spiking Neural Networks (SNNs) have emerged as a transformative development in artificial intelligence, closely emulating the complex dynamics of biological neural networks. While SNNs show promising efficiency on specialized sparse-computational hardware, their practical training often relies on conventional GPUs. This reliance frequently leads to extended computation times compared with traditional Artificial Neural Networks (ANNs), presenting a significant hurdle for advancing SNN research. To address this challenge, we present a novel temporal fusion method specifically designed to expedite the propagation dynamics of SNNs on GPU platforms, complementing existing mainstream approaches to deep learning with SNNs. The method was validated through extensive experiments under both authentic training scenarios and idealized conditions, confirming its efficacy and adaptability on single- and multi-GPU systems. Benchmarked against various existing SNN libraries and implementations, our method achieved speedups ranging from 5× to 40× on NVIDIA A100 GPUs. The experimental code is publicly available at https://github.com/EMI-Group/snn-temporal-fusion.
KW - GPU acceleration
KW - High-performance computing
KW - Spiking neural networks
UR - https://www.scopus.com/pages/publications/85205296856
DO - 10.1007/978-3-031-72341-4_5
M3 - Conference article published in proceeding or book
AN - SCOPUS:85205296856
SN - 9783031723407
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 58
EP - 73
BT - Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings
A2 - Wand, Michael
A2 - Malinovská, Kristína
A2 - Schmidhuber, Jürgen
A2 - Tetko, Igor V.
PB - Springer Science and Business Media Deutschland GmbH
T2 - 33rd International Conference on Artificial Neural Networks, ICANN 2024
Y2 - 17 September 2024 through 20 September 2024
ER -