TY - GEN
T1 - Training Spiking Neural Networks with Local Tandem Learning
AU - Yang, Qu
AU - Wu, Jibin
AU - Zhang, Malu
AU - Chua, Yansong
AU - Wang, Xinchao
AU - Li, Haizhou
N1 - Funding Information:
This research work is supported by IAF, A*STAR, SOITEC, NXP, and the National University of Singapore under FD-fAbrICS: Joint Lab for FD-SOI Always-on Intelligent & Connected Systems (Award I2001E0053), and by the National Research Foundation, Singapore under its Medium-Sized Centre for Advanced Robotics Technology Innovation (WBS: A-0009428-09-00). This research is also supported by the National Natural Science Foundation of China (Grant No. 62106038), the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, China (Grant No. B10120210117-KP02), the Shenzhen Research Institute of Big Data, the National Key Research and Development Program of China (Grant No. 2021ZD0200300), the CCF-Hikvision Open Fund, and The Hong Kong Polytechnic University under Grant No. P0043563.
Publisher Copyright:
© 2022 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2022/11/28
Y1 - 2022/11/28
AB - Spiking neural networks (SNNs) have been shown to be more biologically plausible and energy-efficient than their predecessors. However, efficient and generalized training methods for deep SNNs are still lacking, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows a teacher-student learning approach in which the SNN mimics the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset with low computational complexity. Our experimental results also show that the SNNs thus trained can achieve accuracies comparable to those of their teacher ANNs on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware-friendly: it can be easily implemented on-chip to perform fast parameter calibration and provides robustness against the notorious device non-ideality issues. It therefore opens up a myriad of opportunities for the training and deployment of SNNs on ultra-low-power mixed-signal neuromorphic computing chips.
UR - https://www.scopus.com/pages/publications/85163070570
M3 - Conference article published in proceedings or book
AN - SCOPUS:85163070570
T3 - Advances in Neural Information Processing Systems
SP - 1
EP - 21
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -