Abstract
This paper presents a multi-step backtrack Q(λ) learning method, based on stochastic optimal control, that effectively handles the long time delays of thermal plants in a non-Markovian environment. The moving averages of CPS1/CPS2 serve as the state input, and the CPS control and relaxed-control objectives are formulated as an MDP reward function through a linearly weighted aggregation. The optimal CPS control methodology opens avenues to an online feedback learning rule that maximizes the long-run discounted reward. Statistical experiments show that the Q(λ) controllers markedly enhance the robustness and dynamic performance of AGC systems and reduce the number of pulses and pulse reversals while CPS compliance is ensured. The proposed strategy also provides a convenient means of controlling the degree of compliance and relaxation by tuning the relaxation factors online, thereby implementing the desired CPS relaxed control.
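The multi-step backtrack Q(λ) update summarized above propagates each temporal-difference error backward through eligibility traces, and the linearly weighted reward aggregates the CPS1/CPS2 objectives. A minimal tabular sketch follows; the state/action sizes, learning parameters, and the weight values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def q_lambda_step(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    """One Q(lambda) backup: the TD error is applied to every
    state-action pair in proportion to its eligibility trace."""
    delta = r + gamma * Q[s_next].max() - Q[s, a]  # one-step TD error
    E[s, a] += 1.0                                 # accumulating eligibility trace
    Q += alpha * delta * E                         # multi-step backtrack update
    E *= gamma * lam                               # decay all traces
    return Q, E

def cps_reward(cps1, cps2, w1=0.5, w2=0.5):
    """Hypothetical linearly weighted aggregation of CPS1/CPS2 terms;
    w1, w2 play the role of tunable relaxation weights."""
    return w1 * cps1 + w2 * cps2

# Tiny demo: 3 discretized CPS states, 2 regulation actions.
Q = np.zeros((3, 2))
E = np.zeros((3, 2))
Q, E = q_lambda_step(Q, E, s=0, a=1, r=cps_reward(1.0, 0.8), s_next=2)
```

A full Watkins-style implementation would also reset the traces after exploratory (non-greedy) actions; that detail is omitted here for brevity.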
Original language | Chinese (Simplified) |
---|---|
Pages (from-to) | 179-186 |
Number of pages | 8 |
Journal | Diangong Jishu Xuebao/Transactions of China Electrotechnical Society |
Volume | 26 |
Issue number | 6 |
Publication status | Published - 1 Jun 2011 |
Keywords
- Automatic generation control
- Control performance standard (CPS)
- Multi-step Q(λ) learning
- Non-Markovian environment
- Stochastic optimal control
ASJC Scopus subject areas
- Electrical and Electronic Engineering