• Title/Summary/Keyword: actor-critic

48 search results

Kernel-based actor-critic approach with applications

  • Chu, Baek-Suk;Jung, Keun-Woo;Park, Joo-Young
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.4 / pp.267-274 / 2011
  • Recently, actor-critic methods have drawn significant interest in the area of reinforcement learning, and several algorithms have been studied along the lines of the actor-critic strategy. In this paper, we consider a new type of actor-critic algorithm employing kernel methods, which have recently been shown to be very effective tools in various fields of machine learning, and we investigate how to combine the actor-critic strategy with kernel methods. More specifically, this paper studies actor-critic algorithms utilizing kernel-based least-squares estimation and the policy gradient; for the critic, the study uses a sliding-window-based kernel least-squares method, which leads to fast and efficient value-function estimation in a nonparametric setting. The applicability of the considered algorithms is illustrated via a robot locomotion problem and a tunnel ventilation control problem.
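
A minimal sketch of the sliding-window kernel least-squares idea used for the critic in the entry above: the value function is estimated by kernel ridge regression over only the most recent W transitions, so the Gram matrix stays small and the estimator adapts online. The window size, RBF bandwidth, and regularization constant are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

class SlidingWindowKLS:
    """Nonparametric critic: kernel least-squares over the last `window` samples."""
    def __init__(self, window=50, bandwidth=1.0, reg=1e-3):
        self.window, self.bandwidth, self.reg = window, bandwidth, reg
        self.states, self.targets = [], []

    def update(self, state, td_target):
        # Keep only the most recent `window` (state, TD-target) pairs.
        self.states.append(np.asarray(state, dtype=float))
        self.targets.append(float(td_target))
        self.states, self.targets = self.states[-self.window:], self.targets[-self.window:]
        X, y = np.vstack(self.states), np.array(self.targets)
        K = rbf_kernel(X, X, self.bandwidth)
        # Kernel ridge regression: alpha = (K + reg*I)^{-1} y
        self.alpha, self.X = np.linalg.solve(K + self.reg * np.eye(len(y)), y), X

    def value(self, state):
        if not self.states:
            return 0.0
        k = rbf_kernel(np.atleast_2d(np.asarray(state, dtype=float)), self.X, self.bandwidth)
        return float((k @ self.alpha)[0])

# Hypothetical use in a critic update: td_target = reward + gamma * critic.value(next_state)
```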

Robot Locomotion via RLS-based Actor-Critic Learning (RLS 기반 Actor-Critic 학습을 이용한 로봇이동)

  • Kim, Jong-Ho;Kang, Dae-Sung;Park, Joo-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.234-237 / 2005
  • Among the many approaches to reinforcement learning, the actor-critic learning method based on policy iteration has proven its potential through numerous applications. Actor-critic learning requires actor learning for the control-input selection strategy and critic learning for value-function approximation. This paper proposes a new algorithm that uses RLS (recursive least squares), which guarantees fast convergence, for critic learning, and the policy gradient for actor learning. The performance of the proposed method is then confirmed experimentally.

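A rough sketch of the RLS-based critic update described in the entry above, for a linear value-function approximation V(s) ≈ wᵀφ(s). The feature map, forgetting factor, and covariance initialization are illustrative assumptions.

```python
import numpy as np

class RLSCritic:
    """Recursive least-squares critic for V(s) ~ w . phi(s); one O(d^2) update per sample."""
    def __init__(self, dim, forgetting=0.99, init_cov=100.0):
        self.w = np.zeros(dim)           # value-function weights
        self.P = np.eye(dim) * init_cov  # inverse-covariance estimate
        self.lam = forgetting            # forgetting factor for non-stationary targets

    def update(self, phi, td_target):
        # Standard RLS recursion with target y = r + gamma * V(s')
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        error = td_target - self.w @ phi
        self.w += gain * error
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam

    def value(self, phi):
        return float(self.w @ phi)

# Hypothetical usage with gamma = 0.95:
# critic.update(phi_s, reward + 0.95 * critic.value(phi_s_next))
```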

Actor-Critic Algorithm with Transition Cost Estimation

  • Sergey, Denisov;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.4 / pp.270-275 / 2016
  • We present an approach for accelerating the actor-critic algorithm for reinforcement learning with a continuous action space. The actor-critic algorithm has already proved its robustness to infinitely large action spaces in various high-dimensional environments. Despite that success, the main problem of the actor-critic algorithm remains the same: the speed of convergence to the optimal policy. In a high-dimensional state and action space, searching for the correct action in each state takes an enormously long time. Therefore, in this paper we suggest a search-accelerating function that increases the speed of convergence and reaches the optimal policy faster. In our method, we assume that actions may have their own preference distribution that is independent of the state. Since the agent acts randomly in the environment at the beginning of learning, it is more efficient if actions are taken according to some heuristic function. We demonstrate that the heuristically accelerated actor-critic algorithm learns the optimal policy faster, using an Educational Process Mining dataset with records of students' course learning processes and their grades.
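
A minimal sketch of the state-independent action-preference idea described above: early in training, the agent samples actions by blending its policy with a heuristic preference distribution, and the blend weight decays over time. The softmax form of the preference, the exponential decay schedule, and the discretized action set are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def heuristic_action(policy_probs, preference_scores, step, decay=0.001, rng=None):
    """Blend the policy with a state-independent preference distribution.

    policy_probs      : current policy pi(a|s) over a discretized action set
    preference_scores : heuristic scores per action (independent of the state)
    step              : global training step; the heuristic's weight decays with it
    """
    rng = rng or np.random.default_rng()
    pref = np.exp(preference_scores - preference_scores.max())
    pref /= pref.sum()                      # softmax over action preferences
    beta = np.exp(-decay * step)            # heuristic weight shrinks as learning proceeds
    mixed = beta * pref + (1.0 - beta) * policy_probs
    mixed /= mixed.sum()
    return rng.choice(len(mixed), p=mixed)

# Hypothetical usage: 5 discretized actions, early in training (step = 10)
a = heuristic_action(np.full(5, 0.2), np.array([0.0, 2.0, 1.0, 0.0, -1.0]), step=10)
```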

Improved Deep Q-Network Algorithm Using Self-Imitation Learning (Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘)

  • Sunwoo, Yung-Min;Lee, Won-Chang
    • Journal of IKEEE / v.25 no.4 / pp.644-649 / 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that makes an agent find an optimal policy by using past good experiences. When Self-Imitation Learning is combined with reinforcement learning algorithms that have an actor-critic architecture, it shows performance improvements in various game environments. However, its applications have been limited to reinforcement learning algorithms with an actor-critic architecture. In this paper, we propose a method for applying Self-Imitation Learning to Deep Q-Network, a value-based deep reinforcement learning algorithm, and train it in various game environments. We also show that Self-Imitation Learning can be applied to Deep Q-Network to improve its performance by comparing the proposed algorithm with ordinary Deep Q-Network training results.
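
A rough sketch of how a Self-Imitation Learning term might be attached to a DQN update, following the clipped-advantage form of the original SIL loss: only transitions whose observed discounted return R exceeds the current estimate Q(s, a) contribute, so the agent imitates its own past good experiences. The network shape, optimizer, loss weighting, and batch interface are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy Q-network
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_sil_loss(states, actions, td_targets, returns, sil_weight=1.0):
    """Standard DQN regression loss plus a self-imitation term on above-estimate returns."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    dqn_loss = nn.functional.mse_loss(q_sa, td_targets)
    # SIL term: imitate only transitions whose return beat the current Q estimate.
    advantage = torch.clamp(returns - q_sa, min=0.0)
    sil_loss = 0.5 * (advantage ** 2).mean()
    return dqn_loss + sil_weight * sil_loss

# Hypothetical batch of 8 CartPole-like transitions
s = torch.randn(8, 4); a = torch.randint(0, 2, (8,))
loss = dqn_sil_loss(s, a, td_targets=torch.randn(8), returns=torch.randn(8))
optimizer.zero_grad(); loss.backward(); optimizer.step()
```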

Robot Locomotion via RLS-based Actor-Critic Learning (RLS 기반 Actor-Critic 학습을 이용한 로봇이동)

  • Kim, Jong-Ho;Kang, Dae-Sung;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.7 / pp.893-898 / 2005
  • Due to the merits that only a small amount of computation is needed and that stochastic policies can be handled explicitly, the actor-critic algorithm, which is a class of reinforcement learning methods, has recently attracted a lot of interest in the area of artificial intelligence. The actor-critic network consists of the actor network for selecting control inputs and the critic network for estimating value functions; in the training stage, the actor and critic networks adapt their parameters so as to select good control inputs and yield accurate value-function approximations as fast as possible. In this paper, we consider a new actor-critic algorithm employing an RLS (recursive least squares) method for critic learning and policy gradients for actor learning. The applicability of the considered algorithm is illustrated with experiments on a two-link robot arm.
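
A minimal sketch of the policy-gradient actor that would sit alongside an RLS critic in the entry above: a Gaussian policy over a continuous control input whose mean is linear in the state features, updated with the critic's TD error as the advantage signal. The feature map, learning rate, and fixed exploration noise are illustrative assumptions.

```python
import numpy as np

class GaussianPolicyActor:
    """Actor with a Gaussian policy pi(a|s) = N(theta . phi(s), sigma^2)."""
    def __init__(self, dim, lr=0.01, sigma=0.3):
        self.theta = np.zeros(dim)
        self.lr, self.sigma = lr, sigma

    def act(self, phi, rng=np.random.default_rng()):
        return rng.normal(self.theta @ phi, self.sigma)

    def update(self, phi, action, td_error):
        # Policy gradient: grad_theta log pi(a|s) = (a - mean) * phi / sigma^2,
        # scaled by the TD error delta = r + gamma*V(s') - V(s) supplied by the critic.
        grad_log_pi = (action - self.theta @ phi) * phi / self.sigma ** 2
        self.theta += self.lr * td_error * grad_log_pi

# Hypothetical step: delta comes from an RLS critic such as the one sketched earlier.
# actor.update(phi_s, a, delta)
```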

Control of Crawling Robot using Actor-Critic Fuzzy Reinforcement Learning (액터-크리틱 퍼지 강화학습을 이용한 기는 로봇의 제어)

  • Moon, Young-Joon;Lee, Jae-Hoon;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.519-524 / 2009
  • Recently, reinforcement learning methods have drawn much interest in the area of machine learning. Dominant approaches in reinforcement learning research include the value-function approach, the policy search approach, and the actor-critic approach; pertinent to this paper are algorithms studied along the lines of the actor-critic strategy for problems with continuous states and continuous actions. In particular, this paper focuses on presenting a method that combines the so-called ACFRL (actor-critic fuzzy reinforcement learning), an actor-critic type of reinforcement learning based on fuzzy theory, with RLS-NAC, which is based on RLS filters and natural actor-critic methods. The presented method is applied to a control problem for crawling robots, and some results from a comparison of learning performance are reported.
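
A rough sketch of the natural actor-critic step underlying RLS-NAC, referred to above: with compatible features ψ(s,a) = ∇θ log π(a|s), the advantage weights w obtained by least squares are themselves the natural policy gradient, so the actor update is simply θ ← θ + α·w. The Gaussian policy, the batch least-squares solve (in place of the paper's RLS filter), and the step size are illustrative assumptions.

```python
import numpy as np

def natural_actor_step(theta, batch, sigma=0.2, lr=0.05, reg=1e-3):
    """One natural-gradient actor update from a batch of (phi, action, advantage) samples.

    For a Gaussian policy pi(a|s) = N(theta . phi, sigma^2), the compatible feature is
    psi(s, a) = grad_theta log pi(a|s) = (a - theta . phi) * phi / sigma^2.
    Fitting advantages by least squares on psi yields w, the natural-gradient direction.
    """
    psis = np.array([(a - theta @ phi) * phi / sigma ** 2 for phi, a, _ in batch])
    advs = np.array([adv for _, _, adv in batch])
    # Least-squares fit A(s,a) ~ w . psi(s,a); an RLS filter would do this incrementally.
    w = np.linalg.solve(psis.T @ psis + reg * np.eye(len(theta)), psis.T @ advs)
    return theta + lr * w

# Hypothetical batch: 3-dimensional features with advantages supplied by a critic
batch = [(np.random.randn(3), 0.1, 0.5), (np.random.randn(3), -0.2, -0.1)]
theta = natural_actor_step(np.zeros(3), batch)
```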

Trading Strategy Using RLS-Based Natural Actor-Critic algorithm (RLS기반 Natural Actor-Critic 알고리즘을 이용한 트레이딩 전략)

  • Kang Daesung;Kim Jongho;Park Jooyoung;Park Kyung-Wook
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.238-241 / 2005
  • Recently, a growing number of investors have been trying to trade effectively with the help of computers. Among the many artificial-intelligence methodologies, this paper deals with trading effectively by means of reinforcement learning. In particular, we examine the feasibility of a trading strategy based on the RLS-based natural actor-critic algorithm, in which the actor's parameters are updated using the natural policy gradient and the critic is updated with the RLS (recursive least squares) technique in order to estimate the value function effectively.


Adaptive Actor-Critic Learning of Mobile Robots Using Actual and Simulated Experiences

  • Rafiuddin Syam;Keigo Watanabe;Kiyotaka Izumi;Kazuo Kiguchi;Jin, Sang-Ho
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2001.10a / pp.43.6-43 / 2001
  • In this paper, we describe an actor-critic method as a kind of temporal difference (TD) algorithm. The value function is regarded as a current estimator, in which two value functions have different inputs: one is an actual experience; the other is a simulated experience obtained through a predictive model. Thus, the parameter updating for the actor and critic parts is based on both actual and simulated experiences, where the critic is constructed by a radial-basis-function neural network (RBFNN) and the actor is composed of a kinematic-based controller. As an example application of the present method, a tracking control problem for the position coordinates and azimuth of a nonholonomic mobile robot is considered. The effectiveness is illustrated by a simulation.

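A minimal sketch of mixing actual and simulated experience in a TD-style update, in the spirit of the entry above: each real transition triggers one update from the environment and several updates from transitions generated by a learned predictive model. The tabular value representation, the `env_step`/`model_step` interfaces, and the number of simulated updates are illustrative assumptions, not the paper's RBFNN-plus-kinematic-controller construction.

```python
import random
from collections import defaultdict

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) update of the value estimate for state s."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

def learn_with_model(env_step, model_step, policy, episodes=100, sim_per_real=5):
    """Dyna-style loop: every actual transition is followed by model-simulated updates.

    env_step(s, a)   -> (r, s_next) from the real environment (assumed interface)
    model_step(s, a) -> (r, s_next) from a learned predictive model (assumed interface)
    """
    V = defaultdict(float)
    visited = []
    s = 0                                     # assumed initial state index
    for _ in range(episodes):
        a = policy(s)
        r, s_next = env_step(s, a)            # actual experience
        td_update(V, s, r, s_next)
        visited.append((s, a))
        for _ in range(sim_per_real):         # simulated experience via the model
            sp, ap = random.choice(visited)
            r_sim, sp_next = model_step(sp, ap)
            td_update(V, sp, r_sim, sp_next)
        s = s_next
    return V
```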

Roll control of Underwater Vehicle based Reinforcement Learning using Advantage Actor-Critic (Advantage Actor-Critic 강화학습 기반 수중운동체의 롤 제어)

  • Lee, Byungjun
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.1 / pp.123-132 / 2021
  • In order for an underwater vehicle to perform various tasks, it is important to control its depth, course, and roll. To design such a controller, it is necessary to construct a dynamic model of the underwater vehicle and select appropriate hydrodynamic coefficients. Because the dynamic model used in the controller design is linearized under the assumption of a limited operating range, the control performance is satisfactory in the steady state, but it may be unstable in the transient state. In this paper, to overcome the problems of the existing controller design, we propose an A2C (Advantage Actor-Critic) based roll controller for an underwater vehicle, chosen among the reinforcement learning methods that learn from rewards for actions because of its stable learning performance in continuous spaces. The performance of the proposed A2C-based roll controller is verified through simulation and compared with PID and Dueling DDQN based roll controllers.
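
A rough sketch of the core A2C update referred to above: the critic estimates V(s), the advantage is the one-step TD error, the actor is updated with the policy gradient weighted by that advantage, and the critic with a squared TD error. The tiny two-network setup, the Gaussian distribution over a continuous roll command, the 6-dimensional vehicle state, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 1))   # mean roll command
critic = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 1))  # state value V(s)
log_std = torch.zeros(1, requires_grad=True)                            # action noise (learned)
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()) + [log_std], lr=3e-4)

def a2c_update(state, action, reward, next_state, done, gamma=0.99):
    """One-step Advantage Actor-Critic update for a continuous (Gaussian) action."""
    v = critic(state)
    with torch.no_grad():
        target = reward + gamma * critic(next_state) * (1.0 - done)
    advantage = (target - v).detach().squeeze()          # one-step TD error as advantage
    dist = torch.distributions.Normal(actor(state), log_std.exp())
    actor_loss = -(dist.log_prob(action).sum() * advantage)   # policy-gradient term
    critic_loss = nn.functional.mse_loss(v, target)            # TD-error regression
    opt.zero_grad(); (actor_loss + critic_loss).backward(); opt.step()

# Hypothetical single transition with a 6-dimensional vehicle state
a2c_update(torch.randn(1, 6), torch.tensor([[0.1]]), torch.tensor([[0.5]]),
           torch.randn(1, 6), done=0.0)
```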

Performance Comparison of Crawling Robots Trained by Reinforcement Learning Methods (강화학습에 의해 학습된 기는 로봇의 성능 비교)

  • Park, Ju-Yeong;Jeong, Gyu-Baek;Mun, Yeong-Jun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.33-36 / 2007
  • Recently, interest in reinforcement learning has grown greatly in the field of artificial intelligence, both in Korea and abroad. Recent trends in reinforcement learning fall into three main directions: value-function-based methods, policy search methods, and actor-critic methods. In this paper, we apply the RLS-NAC (recursive least-squares based natural actor-critic) algorithm, one of the NAC (natural actor-critic) techniques in the third category, with various trace-decay coefficients to Kimura's crawling robot, which is controlled with real-valued (continuous) control inputs, and we compare its performance with that obtained by learning with the existing SGA (stochastic gradient ascent) algorithm.

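A minimal sketch of the role of the trace-decay coefficient λ swept in the comparison above: in a TD(λ)-style critic, λ controls how far credit from each TD error propagates back along the eligibility trace, and each λ value corresponds to a separate training run. The tabular critic, the toy environment interface, the grid of λ values, and the return metric are illustrative assumptions, not the paper's RLS-NAC estimator.

```python
import numpy as np

def td_lambda_run(env, num_states, episodes=50, alpha=0.1, gamma=0.95, lam=0.5):
    """Train a tabular TD(lambda) critic and return the average episode return.

    Assumed toy interface: env.reset() -> state index;
    env.step(state) -> (reward, next_state, done) under some fixed behaviour policy.
    """
    V = np.zeros(num_states)
    returns = []
    for _ in range(episodes):
        z = np.zeros(num_states)                # eligibility trace
        s, total, done = env.reset(), 0.0, False
        while not done:
            r, s_next, done = env.step(s)
            delta = r + gamma * V[s_next] * (not done) - V[s]
            z *= gamma * lam                    # decay all traces by gamma * lambda
            z[s] += 1.0                         # accumulate trace for the visited state
            V += alpha * delta * z              # credit propagates along the trace
            s, total = s_next, total + r
        returns.append(total)
    return np.mean(returns)

# Hypothetical sweep over trace-decay coefficients, as in the comparison above:
# for lam in [0.0, 0.3, 0.5, 0.7, 0.9, 1.0]:
#     print(lam, td_lambda_run(my_env, num_states=16, lam=lam))
```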