Deep-Reinforcement-Learning-based Adaptive State-Feedback Control for Inter-Area Oscillation Damping with Continuous Eigenvalue Configurations

Published in CSEE Journal of Power and Energy Systems (CSEE JPES), 2025

Recommended citation: S.Y. Liang, L. Huo, W.Y. Qin, X. Chen, and P.Y. Sun, "Deep-Reinforcement-Learning-based Adaptive State-Feedback Control for Inter-Area Oscillation Damping with Continuous Eigenvalue Configurations," CSEE Journal of Power and Energy Systems (CSEE JPES), 2025.

Controlling inter-area oscillation (IAO) across wide areas is crucial for the stability of modern power systems. Recent advances in deep learning, combined with the extensive deployment of phasor measurement units (PMUs) and generator sensors, have catalyzed the development of data-driven IAO damping controllers. In this paper, a novel IAO damping control framework is presented by modeling the control problem as a Markov Decision Process (MDP) and solving it through deep reinforcement learning (DRL). The DRL-based controller is trained in a state space with continuous eigenvalue configurations. To optimize control performance and cost-efficiency, only a subset of generators, identified by their global participation factors, is selected for control. In addition, a switching control strategy (SCS) is introduced that effectively integrates the DRL-based controller with power system stabilizers (PSSs) to enhance overall performance. Simulation results on the IEEE 39-bus New England power system show that the proposed method outperforms two benchmark methods in terms of transient response. The DRL-based controller trained in the linear state-space environment can be tested directly in the nonlinear differential-algebraic environment. The robustness of the proposed method against communication delays is also thoroughly investigated.
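
Below is a minimal Python sketch (not the authors' code) of the kind of setup the abstract describes: a linearized power-system environment dx/dt = Ax + Bu in which a state-feedback action is applied and judged by the damping of the closed-loop eigenvalues. The matrices `A`, `B`, the reward weighting, and the gain `K` are illustrative placeholders standing in for the learned DRL policy and the actual system model.

```python
# Hedged sketch of a linear state-space training environment with an
# eigenvalue-oriented evaluation; all numerical values are hypothetical.
import numpy as np


class LinearOscillationEnv:
    """Toy linear environment: dx/dt = A x + B u, LQR-like step reward."""

    def __init__(self, A, B, dt=0.01, horizon=500):
        self.A, self.B = np.asarray(A, float), np.asarray(B, float)
        self.dt, self.horizon = dt, horizon
        self.reset()

    def reset(self, x0=None):
        n = self.A.shape[0]
        self.x = np.random.uniform(-0.05, 0.05, n) if x0 is None else np.asarray(x0, float)
        self.t = 0
        return self.x.copy()

    def step(self, u):
        # Forward-Euler integration over one control interval.
        u = np.asarray(u, float)
        self.x = self.x + self.dt * (self.A @ self.x + self.B @ u)
        self.t += 1
        # Penalize state deviation and control effort (illustrative weighting).
        reward = -float(self.x @ self.x) - 0.01 * float(u @ u)
        done = self.t >= self.horizon
        return self.x.copy(), reward, done, {}

    def damping_ratios(self, K):
        # Damping ratios of the closed-loop eigenvalues under u = -K x,
        # i.e. the eigenvalue configuration a trained controller is judged on.
        eig = np.linalg.eigvals(self.A - self.B @ np.asarray(K, float))
        return -eig.real / np.maximum(np.abs(eig), 1e-12)


# Example: a single oscillatory mode near 0.8 Hz with ~2% damping; a hand-picked
# feedback gain K (standing in for the DRL policy output) improves the damping.
A = np.array([[0.0, 1.0], [-25.0, -0.2]])
B = np.array([[0.0], [1.0]])
env = LinearOscillationEnv(A, B)
K = np.array([[0.0, 4.0]])
print("open-loop damping ratios:", env.damping_ratios(np.zeros((1, 2))))
print("closed-loop damping ratios:", env.damping_ratios(K))
```

A DRL agent would interact with such an environment through `reset`/`step` during training; testing the resulting policy on a nonlinear differential-algebraic simulation of the same system, as the abstract notes, only requires swapping the environment while keeping the controller fixed.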