TY - JOUR
T1 - Neural Network-Based Information Transfer for Dynamic Optimization
AU - Liu, Xiao-Fang
AU - Zhan, Zhi-Hui
AU - Gu, Tian-Long
AU - Kwong, Sam
AU - Lu, Zhenyu
AU - Duh, Henry Been-Lirn
AU - Zhang, Jun
N1 - Funding Information:
Manuscript received August 28, 2018; revised March 12, 2019 and May 28, 2019; accepted May 28, 2019. Date of publication July 19, 2019; date of current version May 1, 2020. This work was supported in part by the Outstanding Youth Science Foundation under Grant 61822602, in part by the National Natural Science Foundations of China (NSFC) under Grant 61772207, Grant 61873097, and Grant 61773220, in part by the Natural Science Foundations of Guangdong Province for Distinguished Young Scholars under Grant 2014A030306038, in part by the Guangdong Natural Science Foundation Research Team under Grant 2018B030312003, in part by the Guangdong-Hong Kong Joint Innovation Platform under Grant 2018B050502006, and in part by the Hong Kong GRF-RGC General Research Fund 9042489 (CityU 11206317). (Corresponding authors: Zhi-Hui Zhan; Jun Zhang.) X.-F. Liu is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China.
Publisher Copyright:
© 2019 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - In dynamic optimization problems (DOPs), the optima change dynamically as the environment changes over time. Adapting to the dynamic environment and quickly finding the optima in each environment is a challenging issue in solving DOPs. Usually, a new environment is strongly related to its previous one. If we know how the environment changes from the previous one to the new one, we can transfer information from the previous environment, e.g., past solutions, to obtain promising information about the new environment, e.g., new high-quality solutions. Thus, in this paper, we propose a neural network (NN)-based information transfer method, named NNIT, which learns the transfer model of the environment change via an NN and then uses the learned model to reuse past solutions. When the environment changes, NNIT first collects solutions from both the previous environment and the new environment and then uses an NN to learn the transfer model from these solutions. After that, the NN transfers the past solutions into new promising solutions to assist the optimization in the new environment. The proposed NNIT can be incorporated into population-based evolutionary algorithms (EAs) to solve DOPs. Several typical state-of-the-art EAs for DOPs are selected for a comprehensive study and evaluated on the widely used moving peaks benchmark. The experimental results show that the proposed NNIT is promising and can accelerate algorithm convergence.
KW - Dynamic optimization problem (DOP)
KW - information transfer
KW - neural network (NN)
UR - http://www.scopus.com/inward/record.url?scp=85075256695&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2920887
DO - 10.1109/TNNLS.2019.2920887
M3 - Journal article
C2 - 31329131
AN - SCOPUS:85075256695
SN - 2162-237X
VL - 31
SP - 1557
EP - 1570
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
M1 - 8767010
ER -