Abstract
The NERC control performance standard (CPS) based automatic generation control (AGC) problem is a stochastic multistage decision problem that can be naturally modeled as a reinforcement learning (RL) problem within the framework of Markov decision process (MDP) theory. The paper adopts the Q-learning method as the RL algorithm, taking the CPS values as the rewards returned by the interconnected power system. By adjusting a closed-loop CPS control rule to maximize the total reward during on-line learning, the optimal CPS control strategy is obtained gradually. A practical semi-supervisory pre-learning method is introduced to improve the stability and convergence of the Q-learning controllers. Two case studies show that the proposed controllers markedly enhance the robustness and adaptability of AGC systems while CPS compliance is maintained. © Chinese Society for Electrical Engineering.
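The core mechanism the abstract describes — tabular Q-learning driven by a CPS-derived reward — can be sketched in a minimal, self-contained form. Everything below is an illustrative assumption, not the paper's actual model: the area control error (ACE) is bucketed into five discrete states, the regulation command into three actions (lower / hold / raise), and the toy reward simply penalizes distance of the ACE from its center bucket, standing in for the real CPS-based reward.

```python
import random

# Hypothetical discretisation (assumption, not from the paper):
# 5 ACE buckets, 3 regulation actions (0 = lower, 1 = hold, 2 = raise).
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table initialised to zero; a semi-supervisory pre-learning step, as the
# paper suggests, could instead seed it from an existing control rule.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state, rng):
    """Epsilon-greedy policy over the Q-table."""
    if rng.random() < EPSILON:
        return rng.randrange(N_ACTIONS)
    row = Q[state]
    return row.index(max(row))

def cps_reward(state, action):
    """Toy stand-in for the CPS-based reward: control is best when it drives
    the ACE toward the centre bucket (state 2)."""
    next_state = max(0, min(N_STATES - 1, state + (action - 1)))
    reward = -abs(next_state - 2)
    return reward, next_state

def q_update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])

# On-line learning loop: interact, observe the reward, update the rule.
rng = random.Random(0)
state = 0
for _ in range(5000):
    action = choose_action(state, rng)
    reward, next_state = cps_reward(state, action)
    q_update(state, action, reward, next_state)
    state = next_state
```

After training, the greedy policy read off the Q-table raises generation when the ACE bucket is low and holds at the center, mirroring how the closed-loop CPS control rule is refined to maximize cumulative reward.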
| Original language | Chinese (Simplified) |
|---|---|
| Pages (from-to) | 13-19 |
| Number of pages | 7 |
| Journal | Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering |
| Volume | 29 |
| Issue number | 19 |
| Publication status | Published - 5 Jul 2009 |
Keywords
- Automatic generation control
- Control performance standard
- Markov decision process
- Optimal control
- Q-learning
ASJC Scopus subject areas
- Electrical and Electronic Engineering