Stochastic optimal relaxed automatic generation control in non-Markov environment based on multi-step Q(λ) learning

Tao Yu, Bin Zhou, Ka Wing Chan, Liang Chen, Bo Yang

Research output: Journal article (Academic research, peer-reviewed)

109 Citations (Scopus)

Abstract

This paper proposes a stochastic optimal relaxed control methodology based on reinforcement learning (RL) for solving the automatic generation control (AGC) problem under NERC's control performance standards (CPS). The multi-step Q(λ) learning algorithm is introduced to effectively tackle the long time-delay control loop of AGC thermal plants in a non-Markov environment. The moving averages of CPS1/ACE are adopted as the state feedback input, and the CPS control and relaxed control objectives are formulated as a multi-criteria reward function via a linear weighted aggregation method. This optimal AGC strategy provides a customized platform for interactive self-learning rules to maximize the long-run discounted reward. Statistical experiments show that the RL-based Q(λ) controllers can effectively enhance the robustness and dynamic performance of AGC systems, and reduce the number of pulses and pulse reversals while CPS compliance is ensured. The novel AGC scheme also provides a convenient way of controlling the degree of CPS compliance and relaxation by online tuning of relaxation factors to implement the desired relaxed control.
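To make the learning mechanism in the abstract concrete, the following is a minimal sketch of tabular Watkins's Q(λ) with eligibility traces and a linearly weighted two-criteria reward. Everything here is an illustrative assumption, not the controller reported in the paper: the state/action discretization, the parameter values (ALPHA, GAMMA, LAMBDA, EPSILON), and the reward terms (a CPS1/ACE tracking error traded off against control effort by a relaxation weight w) are all hypothetical.

```python
import numpy as np

# Illustrative sketch only: discretized states (e.g., binned moving
# averages of CPS1/ACE) and discrete regulation actions are assumptions.
N_STATES, N_ACTIONS = 20, 5
ALPHA, GAMMA, LAMBDA, EPSILON = 0.1, 0.9, 0.9, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))   # action-value estimates
E = np.zeros((N_STATES, N_ACTIONS))   # eligibility traces

def reward(cps1_error, control_effort, w=0.7):
    """Linear weighted aggregation of two objectives: CPS compliance
    (tracking) versus relaxed control (less regulation effort).
    The relaxation weight w (hypothetical) trades off the criteria."""
    return -(w * cps1_error**2 + (1.0 - w) * control_effort**2)

def epsilon_greedy(s, rng):
    """Exploratory action selection over the discrete action set."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))

def q_lambda_step(s, a, r, s_next, a_next):
    """One Watkins's Q(lambda) backup: the TD error is propagated along
    the eligibility traces, so a delayed reward from a long time-delay
    control loop still credits earlier state-action pairs."""
    a_star = int(np.argmax(Q[s_next]))      # greedy action at s'
    delta = r + GAMMA * Q[s_next, a_star] - Q[s, a]
    E[s, a] += 1.0                          # accumulating trace
    Q[:] += ALPHA * delta * E               # multi-step credit assignment
    if a_next == a_star:
        E[:] *= GAMMA * LAMBDA              # decay traces after a greedy step
    else:
        E[:] = 0.0                          # cut traces on exploration
```

In use, the controller would observe the discretized CPS1/ACE state each AGC cycle, pick an action with epsilon_greedy, apply the regulation command, then call q_lambda_step with the aggregated reward; tuning w online corresponds loosely to the relaxation-factor adjustment the abstract describes.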
Original language: English
Article number: 5706397
Pages (from-to): 1272-1282
Number of pages: 11
Journal: IEEE Transactions on Power Systems
Volume: 26
Issue number: 3
DOIs
Publication status: Published - 1 Aug 2011
