• Title/Summary/Keyword: Soft Actor Critic Algorithm

Search Results: 2

Visual Object Manipulation Based on Exploration Guided by Demonstration (시연에 의해 유도된 탐험을 통한 시각 기반의 물체 조작)

  • Kim, Doo-Jun;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.17 no.1 / pp.40-47 / 2022
  • A reward function suitable for a task is required to manipulate objects through reinforcement learning. However, it is difficult to design the reward function when ample information about the objects cannot be obtained. In this study, a demonstration-based object manipulation algorithm called stochastic exploration guided by demonstration (SEGD) is proposed to solve the design problem of the reward function. SEGD is a reinforcement learning algorithm in which a sparse reward explorer (SRE) and an interpolated policy using demonstration (IPD) are added to soft actor-critic (SAC). SRE ensures the training of the SAC critic by collecting prior data, and IPD limits the exploration space by making SEGD's action similar to the expert's action. Through these two components, SEGD can learn using only the task's sparse reward, without a hand-designed reward function. To verify SEGD, experiments were conducted on three tasks, in which SEGD demonstrated its effectiveness by achieving success rates of more than 96.5%.
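The IPD idea described in the abstract, blending the learner's action with the expert's demonstrated action so that exploration stays near expert behavior early in training, can be sketched as follows. This is an illustrative sketch only; the function names, the linear interpolation form, and the linear annealing schedule are assumptions, not the paper's exact formulation.

```python
def interpolated_action(policy_action, expert_action, beta):
    """Blend the policy's action with the expert's demonstrated action.

    beta=1.0 -> pure expert action (early training),
    beta=0.0 -> pure policy action (late training).
    Hypothetical linear interpolation; the paper may use a different rule.
    """
    return [beta * e + (1.0 - beta) * p
            for p, e in zip(policy_action, expert_action)]

def anneal_beta(step, total_steps):
    """Linearly decay the demonstration weight as training proceeds,
    gradually handing control from the expert back to the learner."""
    return max(0.0, 1.0 - step / total_steps)
```

In this sketch the SAC policy still explores stochastically, but its sampled action is pulled toward the demonstration, which is one plausible way to shrink the effective exploration space under a sparse reward.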

Enhancing Smart Grid Efficiency through SAC Reinforcement Learning: Renewable Energy Integration and Optimal Demand Response in the CityLearn Environment (SAC 강화 학습을 통한 스마트 그리드 효율성 향상: CityLearn 환경에서 재생 에너지 통합 및 최적 수요 반응)

  • Esanov Alibek Rustamovich;Seung Je Seong;Chang-Gyoon Lim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.93-104 / 2024
  • Demand response is a strategy that encourages customers to adjust their consumption patterns at times of peak demand, with the aim of improving the reliability of the power grid and minimizing expenses. The integration of renewable energy sources into smart grids poses significant challenges due to their intermittent and unpredictable nature. Demand response strategies, coupled with reinforcement learning techniques, have emerged as promising approaches to address these challenges and optimize grid operations where traditional methods fail to meet such complex requirements. This research investigates the application of reinforcement learning algorithms in demand response for renewable energy integration. The objectives include optimizing demand-side flexibility, improving renewable energy utilization, and enhancing grid stability. The results emphasize the effectiveness of demand response strategies based on reinforcement learning in enhancing grid flexibility and facilitating the integration of renewable energy.
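The core demand-response mechanism the abstract describes, an agent learning to shift flexible consumption away from peak periods, can be illustrated with a deliberately tiny one-step tabular sketch. This is not the paper's SAC-on-CityLearn setup; the price pattern, reward shape, and learning rule below are all hypothetical, chosen only to show how a reward that penalizes peak-time consumption teaches load shifting.

```python
import random

# Toy setting: each hour the agent observes a price level
# (0 = off-peak, 1 = peak) and chooses to run (1) or defer (0)
# a flexible load. All values here are illustrative assumptions.
PRICES = [0, 0, 1, 1, 0, 1, 0, 0]  # hypothetical daily price pattern

def reward(price, action):
    """Favor serving the load off-peak; penalize peak consumption."""
    if action == 1:
        return 1.0 if price == 0 else -1.0
    return -0.1  # small comfort cost for deferring the load

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """One-step epsilon-greedy value learning over (price, action) pairs."""
    rng = random.Random(seed)
    q = {(p, a): 0.0 for p in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        for price in PRICES:
            if rng.random() < eps:
                a = rng.choice((0, 1))          # explore
            else:
                a = max((0, 1), key=lambda x: q[(price, x)])  # exploit
            # incremental update toward the observed reward
            q[(price, a)] += alpha * (reward(price, a) - q[(price, a)])
    return q
```

After training, the learned values prefer running the load off-peak and deferring it at peak hours, which is the qualitative behavior demand-response controllers (including the SAC agent in the paper) are trained to produce at much larger scale.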