TY - GEN
T1 - Which Surrogate Works for Empirical Performance Modelling? A Case Study with Differential Evolution
AU - Li, Ke
AU - Xiang, Zilin
AU - Tan, Kay Chen
N1 - Funding Information:
This work was supported by a UKRI Future Leaders Fellowship under grant MR/S017062/1.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
AB - It is not uncommon for meta-heuristic algorithms to contain intrinsic parameters whose optimal configuration is crucial for achieving peak performance. However, evaluating the effectiveness of a configuration is expensive, as it involves many costly runs of the target algorithm. Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models an algorithm's empirical performance as a function of its parameters. Such surrogates constitute an important building block for understanding algorithm performance, algorithm portfolio/selection, and automatic algorithm configuration. In principle, many off-the-shelf machine learning techniques can be used to build surrogates. In this paper, we take differential evolution (DE) as the baseline algorithm for a proof-of-concept study. Regression models are trained to model DE's empirical performance given a parameter configuration. In particular, we evaluate and compare four popular regression algorithms, both in terms of how well they predict the empirical performance of a particular parameter configuration and how well they approximate the landscape of parameters versus empirical performance.
KW - differential evolution
KW - Empirical performance modelling
KW - landscape analysis
KW - parameter configuration
UR - http://www.scopus.com/inward/record.url?scp=85071314448&partnerID=8YFLogxK
U2 - 10.1109/CEC.2019.8789984
DO - 10.1109/CEC.2019.8789984
M3 - Conference article published in proceedings or book
AN - SCOPUS:85071314448
T3 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
SP - 1988
EP - 1995
BT - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019
Y2 - 10 June 2019 through 13 June 2019
ER -