Journal of South China University of Technology (Natural Science Edition) ›› 2025, Vol. 53 ›› Issue (1): 1-9. doi: 10.12141/j.issn.1000-565X.240218

• Energy, Power & Electrical Engineering •

Distributed Energy Cluster Scheduling Method Based on EA-RL Algorithm

CHENG Xiaohua, WANG Zefu, ZENG Jun, ZENG Jingyao, TAN Haojie   

  1. School of Electric Power Engineering, South China University of Technology, Guangzhou 510640, Guangdong, China
  • Received: 2024-05-06 Online: 2025-01-25 Published: 2025-01-02
  • About author: CHENG Xiaohua (b. 1963), male, Ph.D., professor, mainly engaged in research on the basic theory of electric machines, novel electric machines, and electric machine design. E-mail: epxhc@scut.edu.cn
  • Supported by:
    the National Natural Science Foundation of China (62173148); the Natural Science Foundation of Guangdong Province (2022A1515010150); the Guangdong Basic and Applied Basic Research Foundation (2022A1515240026)

Abstract:

At present, research on distributed energy cluster scheduling is mostly limited to a single scenario and lacks efficient, accurate algorithms. To address these problems, this paper proposed a multi-scenario scheduling method for distributed energy clusters based on evolutionary-algorithm-experience-guided deep reinforcement learning (EA-RL). First, individual models of the power supplies, energy storage, and loads in a distributed energy cluster were established. On the basis of these individual scheduling models, a multi-scenario optimal scheduling model for the distributed energy cluster, covering auxiliary peak regulation and frequency regulation services, was built. Within the framework of evolutionary reinforcement learning, an EA-RL algorithm combining the genetic algorithm (GA) with the deep deterministic policy gradient (DDPG) algorithm was proposed: experience sequences serve as GA individuals for crossover, mutation, and selection, and the high-quality experiences thus selected are added to the DDPG experience pool to guide agent training, improving the search efficiency and convergence of the algorithm. According to the multi-scenario scheduling model, the state space and action space of the multi-scenario scheduling problem were constructed. The reward function was then built to minimize the scheduling cost, the deviation from auxiliary-service scheduling instructions, the tie-line power over-limit, and the source-load power difference, yielding the reinforcement learning model. To validate the effectiveness of the proposed algorithm and model, scheduling agents were trained offline on multi-scenario simulation cases, producing agents capable of adapting to various grid scenarios.
Verification was then carried out through online decision-making, and the agents' scheduling decision-making capabilities were assessed from the decision outcomes; the validity of the algorithm was further verified through comparison with the DDPG algorithm. Finally, the trained agents underwent 60 consecutive days of online decision-making tests incorporating varying degrees of disturbance to validate their a posteriori effectiveness and robustness.
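The core mechanism described in the abstract, a genetic algorithm that evolves experience sequences and injects the fittest ones into a DDPG-style experience pool, can be illustrated with a minimal sketch. All function names, data shapes, and parameters below are illustrative assumptions, not the paper's implementation; experiences are simplified to (state, action, reward) tuples, and fitness is the cumulative reward of a sequence.

```python
import random

# Hypothetical sketch of the EA-RL experience-selection idea: a GA performs
# crossover, mutation, and selection over experience sequences, and the elite
# sequence's transitions are pushed into a DDPG-style replay buffer.

def episode_return(sequence):
    """Fitness of an experience sequence = sum of its per-step rewards."""
    return sum(r for (_s, _a, r) in sequence)

def crossover(parent_a, parent_b):
    """One-point crossover between two equally long experience sequences."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(sequence, noise=0.1):
    """Perturb the action of one randomly chosen transition."""
    i = random.randrange(len(sequence))
    s, a, r = sequence[i]
    mutated = list(sequence)
    mutated[i] = (s, a + random.uniform(-noise, noise), r)
    return mutated

def evolve_and_fill_buffer(population, replay_buffer,
                           generations=10, elite_frac=0.5):
    """Run the GA, then add the best sequence's transitions to the buffer."""
    for _ in range(generations):
        # selection: keep the highest-return sequences as elites
        population.sort(key=episode_return, reverse=True)
        elites = population[: max(2, int(elite_frac * len(population)))]
        # reproduction: refill the population with mutated crossovers of elites
        children = []
        while len(children) < len(population) - len(elites):
            pa, pb = random.sample(elites, 2)
            children.append(mutate(crossover(pa, pb)))
        population = elites + children
    # high-quality experiences guide the DDPG agent's subsequent training
    best = max(population, key=episode_return)
    replay_buffer.extend(best)
    return population, replay_buffer
```

In the full method, the replay buffer would be sampled by the DDPG actor-critic update as usual; the GA step only biases the buffer's contents toward high-return trajectories, which is what improves search efficiency and convergence.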

Key words: distributed energy cluster, deep reinforcement learning, evolutionary reinforcement learning algorithm, integrated scheduling for multiple scenarios
