Operation of Distributed Battery Considering Demand Response Using Deep Reinforcement Learning in Grid Edge Control
Main Authors:
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects:
Online Access: https://doaj.org/article/445e25fbd8364979ad3639d27eb2c7de
Summary: Battery energy storage systems (BESSs) can facilitate economical operation of the grid through demand response (DR) and are regarded as the most significant DR resource. Among them, distributed BESSs integrated with home photovoltaics (PV) have developed rapidly, accounting for nearly 40% of newly installed capacity. However, the usage scenarios and operating efficiency of distributed BESSs remain far from sufficient to utilize potential loads and to overcome the uncertainties caused by disorderly operation. In this paper, the low-voltage transformer-powered area (LVTPA) is first defined, and a DR grid edge controller is then implemented based on deep reinforcement learning to maximize the total DR benefit and promote three-phase balance in the LVTPA. The proposed DR problem is formulated as a Markov decision process (MDP), and the deep deterministic policy gradient (DDPG) algorithm is applied to train the controller to learn the optimal DR strategy. Additionally, a life cycle cost model of the BESS is established and incorporated into the DR scheme to quantify the income. Numerical results, compared against deep Q-learning and model-based methods, demonstrate the effectiveness and validity of the proposed method.
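Since the record provides only the abstract, the following is a minimal sketch of the kind of DDPG training loop it describes, not the authors' implementation. The toy environment, the assumed state vector [SoC, PV output, load, price], the reward (arbitrage revenue minus a crude degradation penalty standing in for the paper's life cycle cost model), the network sizes, and all hyperparameters are illustrative assumptions; the paper's actual LVTPA model and three-phase balance objective are not reproduced here.

```python
# Minimal DDPG sketch for battery DR scheduling (illustrative assumptions only).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

GAMMA, TAU, BATCH = 0.99, 0.005, 64
STATE_DIM, ACTION_DIM = 4, 1  # assumed state: [SoC, PV, load, price]; action: power

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())        # deterministic policy mu(s)
critic = mlp(STATE_DIM + ACTION_DIM, 1)              # action-value Q(s, a)
actor_t = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())      # target networks
critic_t = mlp(STATE_DIM + ACTION_DIM, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = deque(maxlen=100_000)

def toy_env_step(state, action):
    """Hypothetical single-battery DR environment (not the paper's LVTPA model)."""
    soc, pv, load, price = state                     # pv/load would shape a fuller reward
    power = float(action[0])                         # + discharge, - charge (normalized)
    soc = float(np.clip(soc - 0.1 * power, 0.0, 1.0))
    # Revenue from discharging at high price, minus a crude degradation penalty
    # standing in for the paper's life cycle cost model.
    reward = price * power - 0.05 * abs(power)
    nxt = np.array([soc, np.random.rand(), np.random.rand(), np.random.rand()],
                   dtype=np.float32)
    return nxt, reward

def soft_update(net, target):
    for p, tp in zip(net.parameters(), target.parameters()):
        tp.data.mul_(1 - TAU).add_(TAU * p.data)

state = np.array([0.5, 0.3, 0.6, 0.4], dtype=np.float32)
for step in range(5000):
    with torch.no_grad():
        action = actor(torch.from_numpy(state)).numpy()
    action = np.clip(action + 0.1 * np.random.randn(ACTION_DIM), -1, 1)  # exploration
    next_state, reward = toy_env_step(state, action)
    buffer.append((state, action.astype(np.float32), reward, next_state))
    state = next_state
    if len(buffer) < BATCH:
        continue
    s, a, r, s2 = map(np.stack, zip(*random.sample(buffer, BATCH)))
    s, a, s2 = map(torch.from_numpy, (s, a, s2))
    r = torch.from_numpy(r.astype(np.float32)).unsqueeze(1)
    with torch.no_grad():                            # bootstrapped TD target
        q_target = r + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    soft_update(actor, actor_t); soft_update(critic, critic_t)
```

One reason DDPG fits this problem better than the deep Q-learning baseline the abstract mentions: battery charge/discharge power is a continuous quantity, which DDPG handles natively through its deterministic actor, whereas Q-learning requires discretizing the action space.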