TY - GEN
T1 - Benchmarking Neural Decoding Backbones Towards Enhanced On-Edge iBCI Applications
AU - Zhou, Zhou
AU - He, Guohang
AU - Zhang, Zheng
AU - Leng, Luziwei
AU - Guo, Qinghai
AU - Liao, Jianxing
AU - Song, Xuan
AU - Cheng, Ran
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025/4
Y1 - 2025/4
N2 - Traditional invasive Brain-Computer Interfaces (iBCIs) typically depend on neural decoding processes conducted on workstations within laboratory settings, which prevents their everyday use. Implementing these decoding processes on edge devices, such as wearables, introduces considerable challenges related to computational demands, processing speed, and accuracy. This study seeks to identify an optimal neural decoding backbone that offers robust performance and swift inference suitable for edge deployment. We executed a series of neural decoding experiments involving nonhuman primates engaged in random reaching tasks, evaluating four prospective models, namely the Gated Recurrent Unit (GRU), Transformer, Receptance Weighted Key Value (RWKV), and Selective State Space model (Mamba), across several metrics: single-session decoding, multi-session decoding, new-session fine-tuning, inference speed, calibration speed, and scalability. The findings indicate that although the GRU model delivers sufficient accuracy, the RWKV and Mamba models are preferable due to their superior inference and calibration speeds. Additionally, RWKV and Mamba follow the scaling law, demonstrating improved performance with larger datasets and increased model sizes, whereas GRU shows less pronounced scalability and the Transformer model requires computational resources that scale prohibitively. This paper presents a thorough comparative analysis of the four models in various scenarios. The results are pivotal in pinpointing an optimal backbone that can handle increasing data volumes and is viable for edge implementation, providing essential insights for ongoing research and practical applications in the field.
AB - Traditional invasive Brain-Computer Interfaces (iBCIs) typically depend on neural decoding processes conducted on workstations within laboratory settings, which prevents their everyday use. Implementing these decoding processes on edge devices, such as wearables, introduces considerable challenges related to computational demands, processing speed, and accuracy. This study seeks to identify an optimal neural decoding backbone that offers robust performance and swift inference suitable for edge deployment. We executed a series of neural decoding experiments involving nonhuman primates engaged in random reaching tasks, evaluating four prospective models, namely the Gated Recurrent Unit (GRU), Transformer, Receptance Weighted Key Value (RWKV), and Selective State Space model (Mamba), across several metrics: single-session decoding, multi-session decoding, new-session fine-tuning, inference speed, calibration speed, and scalability. The findings indicate that although the GRU model delivers sufficient accuracy, the RWKV and Mamba models are preferable due to their superior inference and calibration speeds. Additionally, RWKV and Mamba follow the scaling law, demonstrating improved performance with larger datasets and increased model sizes, whereas GRU shows less pronounced scalability and the Transformer model requires computational resources that scale prohibitively. This paper presents a thorough comparative analysis of the four models in various scenarios. The results are pivotal in pinpointing an optimal backbone that can handle increasing data volumes and is viable for edge implementation, providing essential insights for ongoing research and practical applications in the field.
KW - Brain-computer interfaces
KW - Deep neural networks
KW - Neural decoding
UR - https://www.scopus.com/pages/publications/105003633756
U2 - 10.1007/978-981-96-4001-0_13
DO - 10.1007/978-981-96-4001-0_13
M3 - Conference article published in proceeding or book
AN - SCOPUS:105003633756
SN - 9789819640003
T3 - Communications in Computer and Information Science
SP - 192
EP - 206
BT - Human Brain and Artificial Intelligence - 4th International Workshop, HBAI 2024, Proceedings
A2 - Liu, Quanying
A2 - Qu, Youzhi
A2 - Wu, Haiyan
A2 - Qi, Yu
A2 - Zeng, An
A2 - Pan, Dan
PB - Springer Science and Business Media Deutschland GmbH
T2 - 4th International Workshop on Human Brain and Artificial Intelligence, HBAI 2024
Y2 - 3 August 2024 through 3 August 2024
ER -