Enhancing Smart Grid Efficiency through SAC Reinforcement Learning: Renewable Energy Integration and Optimal Demand Response in the CityLearn Environment

  • Received: 2023.12.29
  • Accepted: 2024.02.17
  • Published: 2024.02.29

Abstract

Demand response is a strategy that encourages customers to adjust their consumption patterns during periods of peak demand, with the aim of improving the reliability of the power grid and minimizing costs. The integration of renewable energy sources into smart grids poses significant challenges due to their intermittent and unpredictable nature. Demand response strategies coupled with reinforcement learning techniques have emerged as promising approaches for addressing these challenges and optimizing grid operations where traditional methods fail to meet such complex requirements. This research investigates the application of reinforcement learning algorithms to demand response for renewable energy integration. The objectives are to optimize demand-side flexibility, improve renewable energy utilization, and enhance grid stability. The results demonstrate the effectiveness of demand response strategies based on reinforcement learning in enhancing grid flexibility and facilitating the integration of renewable energy.
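
As the title indicates, the study trains a Soft Actor-Critic (SAC) agent in the CityLearn simulation environment. The following is a minimal sketch of such a setup, assuming the CityLearn v2 Python API (with its stable-baselines3 wrappers) and the SAC implementation from stable-baselines3; the schema name, hyperparameters, and rollout loop are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the setup described in the abstract: a SAC agent
# controlling building energy storage for demand response in CityLearn.
# Assumes the CityLearn v2 Python API and stable-baselines3; the schema
# name and hyperparameters below are illustrative, not from the paper.
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import NormalizedObservationWrapper, StableBaselines3Wrapper
from stable_baselines3 import SAC

# A single central agent acts on behalf of all buildings in the district.
env = CityLearnEnv(schema='citylearn_challenge_2022_phase_1', central_agent=True)
env = NormalizedObservationWrapper(env)  # scale observations for the networks
env = StableBaselines3Wrapper(env)       # expose a Gym-compatible interface

model = SAC('MlpPolicy', env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=10_000)      # short run, for illustration only

# Roll out the learned policy for one episode (gymnasium-style API assumed).
observations, _ = env.reset()
done = False
while not done:
    actions, _ = model.predict(observations, deterministic=True)
    observations, reward, terminated, truncated, _ = env.step(actions)
    done = terminated or truncated
```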

Acknowledgments

This research was supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-002).
