Abstract
This paper proposes a stochastic optimal relaxed control methodology based on reinforcement learning (RL) for automatic generation control (AGC) under NERC's control performance standards (CPS). The multi-step Q(λ) learning algorithm is introduced to effectively tackle the long time-delay control loop of AGC thermal plants in a non-Markovian environment. The moving averages of CPS1/ACE are adopted as the state feedback input, and the CPS control and relaxed control objectives are formulated as a multi-criteria reward function via a linearly weighted aggregation method. This optimal AGC strategy provides a customized platform for interactive self-learning rules that maximize the long-run discounted reward. Statistical experiments show that the RL-based Q(λ) controllers can effectively enhance the robustness and dynamic performance of AGC systems, and reduce the number of pulses and pulse reversals while CPS compliance is ensured. The novel AGC scheme also provides a convenient way of controlling the degree of CPS compliance and relaxation by tuning relaxation factors online to implement the desired relaxed control.
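To make the abstract's method concrete, the sketch below shows a generic tabular Q(λ) update with eligibility traces and a linearly weighted two-term reward of the kind described above. It is an illustrative sketch only, not the authors' implementation: the state/action discretization, the weights `w_cps`/`w_relax`, and the function names are hypothetical placeholders, and the naive (non-Watkins) trace variant is used for brevity.

```python
# Illustrative sketch (NOT the paper's implementation): tabular Q(lambda)
# with accumulating eligibility traces and a linearly weighted two-term
# reward. All sizes, weights, and names below are hypothetical.
import numpy as np

n_states, n_actions = 10, 5        # hypothetical discretization of CPS1/ACE moving averages
alpha, gamma, lam = 0.1, 0.9, 0.9  # learning rate, discount factor, trace-decay factor
w_cps, w_relax = 0.7, 0.3          # hypothetical linear weights for the two objectives

Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)               # eligibility traces

def reward(cps_term, relax_term):
    """Multi-criteria reward as a linearly weighted aggregate of a
    CPS-compliance term and a relaxed-control term."""
    return w_cps * cps_term + w_relax * relax_term

def q_lambda_step(s, a, r, s_next, eps=0.1):
    """One naive Q(lambda) backup: the TD error is propagated to all
    recently visited state-action pairs via the trace matrix E, which
    is what lets multi-step learning cope with delayed plant response."""
    global Q, E
    a_greedy = int(np.argmax(Q[s_next]))
    delta = r + gamma * Q[s_next, a_greedy] - Q[s, a]
    E[s, a] += 1.0                 # accumulating trace for the visited pair
    Q += alpha * delta * E         # multi-step credit assignment
    E *= gamma * lam               # decay all traces
    # epsilon-greedy choice of the next regulation action
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return a_greedy
```

The multi-step traces are the point of interest here: because the TD error is spread over earlier state-action pairs, credit can reach actions whose effect on CPS1/ACE only appears after the plant's long time delay, which single-step Q-learning handles poorly.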
| Original language | English |
| --- | --- |
| Article number | 5706397 |
| Pages (from-to) | 1272-1282 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Power Systems |
| Volume | 26 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Aug 2011 |