TY - GEN
T1 - Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration
AU - Wang, Lishuang
AU - Zhao, Mengfei
AU - Liu, Enyu
AU - Sun, Kebin
AU - Cheng, Ran
N1 - Publisher Copyright:
© 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/7/14
Y1 - 2024/7/14
N2 - The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness derives from initiating with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains an impediment, limiting its scalability potential. In response, this paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations into uniformly shaped tensors for computation. This advancement facilitates the execution of the NEAT algorithm in a parallelized manner across the entire population. Furthermore, we develop TensorNEAT, a library that implements the tensorized NEAT algorithm and its variants, such as CPPN and HyperNEAT. Building upon JAX, TensorNEAT promotes efficient parallel computations via automated function vectorization and hardware acceleration. Moreover, the TensorNEAT library supports various benchmark environments, including Gym, Brax, and gymnax. Through evaluations across a spectrum of robotics control environments in Brax, TensorNEAT achieves up to 500x speedups compared to existing implementations such as NEAT-Python. Source code is available at: https://github.com/EMI-Group/tensorneat.
KW - algorithm library
KW - GPU acceleration
KW - neuroevolution
UR - https://www.scopus.com/pages/publications/85206904626
U2 - 10.1145/3638529.3654210
DO - 10.1145/3638529.3654210
M3 - Conference article published in proceeding or book
AN - SCOPUS:85206904626
T3 - GECCO 2024 - Proceedings of the 2024 Genetic and Evolutionary Computation Conference
SP - 1156
EP - 1164
BT - GECCO 2024 - Proceedings of the 2024 Genetic and Evolutionary Computation Conference
PB - Association for Computing Machinery, Inc
T2 - 2024 Genetic and Evolutionary Computation Conference, GECCO 2024
Y2 - 14 July 2024 through 18 July 2024
ER -