TY - JOUR
T1 - ROA
T2 - A Rapid Learning Scheme for In-Situ Memristor Networks
AU - Zhang, Wenli
AU - Wang, Yaoyuan
AU - Ji, Xinglong
AU - Wu, Yujie
AU - Zhao, Rong
N1 - Publisher Copyright:
© 2021 Zhang, Wang, Ji, Wu and Zhao.
PY - 2021/10/15
Y1 - 2021/10/15
N2 - Memristors show great promise in neuromorphic computing owing to their high-density integration, fast computation and low energy consumption. However, the non-ideal update of synaptic weights in memristor devices, including nonlinearity, asymmetry and device variation, still poses challenges to the in-situ learning of memristors, thereby limiting their broad application. Although existing offline learning schemes can avoid this problem by moving the weight optimization process into the cloud, they struggle to adapt to unseen tasks and uncertain environments. Here, we propose a bi-level meta-learning scheme, named Rapid One-step Adaption (ROA), that alleviates the non-ideal update problem and achieves fast adaptation and high accuracy. By introducing a special regularization constraint and a dynamic learning rate strategy for in-situ learning, the ROA method effectively combines offline pre-training with online rapid one-step adaption. Furthermore, we implemented it on memristor-based neural networks to solve few-shot learning tasks, demonstrating its superiority over purely offline and online schemes under noisy conditions. This method enables in-situ learning in non-ideal memristor networks, with potential applications in on-chip neuromorphic learning and edge computing.
AB - Memristors show great promise in neuromorphic computing owing to their high-density integration, fast computation and low energy consumption. However, the non-ideal update of synaptic weights in memristor devices, including nonlinearity, asymmetry and device variation, still poses challenges to the in-situ learning of memristors, thereby limiting their broad application. Although existing offline learning schemes can avoid this problem by moving the weight optimization process into the cloud, they struggle to adapt to unseen tasks and uncertain environments. Here, we propose a bi-level meta-learning scheme, named Rapid One-step Adaption (ROA), that alleviates the non-ideal update problem and achieves fast adaptation and high accuracy. By introducing a special regularization constraint and a dynamic learning rate strategy for in-situ learning, the ROA method effectively combines offline pre-training with online rapid one-step adaption. Furthermore, we implemented it on memristor-based neural networks to solve few-shot learning tasks, demonstrating its superiority over purely offline and online schemes under noisy conditions. This method enables in-situ learning in non-ideal memristor networks, with potential applications in on-chip neuromorphic learning and edge computing.
KW - fast adaptation
KW - few-shot learning
KW - memristor
KW - meta-learning
KW - neuromorphic
UR - https://www.scopus.com/pages/publications/85117793782
U2 - 10.3389/frai.2021.692065
DO - 10.3389/frai.2021.692065
M3 - Journal article
AN - SCOPUS:85117793782
SN - 2624-8212
VL - 4
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
M1 - 692065
ER -
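
For readers who want a concrete picture of the scheme described in the abstract, the following is a minimal illustrative sketch (not the authors' code) of a bi-level meta-learning loop with a single regularized in-situ adaptation step, a dynamic learning rate, and noisy weight writes standing in for memristor non-idealities. The Reptile-style outer update, the quadratic regularizer toward the pre-trained weights, and all function names and constants are assumptions made for illustration only.

# Hypothetical sketch: offline pre-training plus one-step in-situ adaptation.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    # Least-squares toy model: predictions x @ w, squared-error loss.
    err = x @ w - y
    return 0.5 * np.mean(err ** 2), x.T @ err / len(y)

def noisy_write(delta, sigma=0.1):
    # Multiplicative write noise standing in for non-ideal conductance updates.
    return delta * (1.0 + sigma * rng.standard_normal(delta.shape))

def in_situ_adapt(w_ref, x, y, lr, reg=0.01, read_noise=0.05):
    # One regularized gradient step on the device: read the pre-trained
    # weights with noise, then apply a single noisy write. The regularizer
    # pulls the adapted weights back toward the offline reference w_ref.
    w = w_ref * (1.0 + read_noise * rng.standard_normal(w_ref.shape))
    _, g = loss_and_grad(w, x, y)
    return w - noisy_write(lr * (g + reg * (w - w_ref)))

# Outer loop: offline pre-training over sampled tasks (Reptile-style update).
dim, meta_lr = 8, 0.5
w_meta = np.zeros(dim)
for step in range(200):
    w_task_true = rng.standard_normal(dim)        # a random synthetic task
    x = rng.standard_normal((16, dim))
    y = x @ w_task_true
    inner_lr = 0.1 / (1.0 + 0.01 * step)          # dynamic learning rate
    w_adapted = in_situ_adapt(w_meta, x, y, inner_lr)
    w_meta += meta_lr * (w_adapted - w_meta)      # move meta weights toward the adapted solution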