• Title/Abstract/Keyword: Sparse Reward

Search results: 4

Cooperative Multi-agent Reinforcement Learning on Sparse Reward Battlefield Environment using QMIX and RND in Ray RLlib

  • Minkyoung Kim
    • 한국컴퓨터정보학회논문지 / Vol. 29, No. 1 / pp.11-19 / 2024
  • Multi-agent systems can be applied in many real-world cooperative settings, such as battlefield engagements and unmanned transport vehicles. In a battlefield engagement, limited domain information makes it difficult to design a dense (immediate) reward, so learning under an explicitly sparse reward must be considered. This paper examines the potential for cooperation among allied agents in battlefield engagements: we define an analogous problem and evaluation criteria using the sparse-reward Multi-Robot Warehouse Environment (RWARE) and build a training setup with the QMIX algorithm from the reinforcement learning library Ray RLlib. For the defined problem, we improve QMIX's agent network and apply Random Network Distillation (RND). Experiments confirm that this extracts patterns and temporal features from the agents' partial observations and that the agents' intrinsic reward improves the acquisition of sparse-reward experiences.
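The RND mechanism described in this abstract is compact enough to sketch. The snippet below, assuming PyTorch, shows the core idea: a frozen, randomly initialized target network and a trained predictor network, whose prediction error serves as an intrinsic novelty reward added to the sparse extrinsic reward. Layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
# Minimal RND sketch (assumed PyTorch implementation, not the paper's code).
import torch
import torch.nn as nn

class RND(nn.Module):
    def __init__(self, obs_dim: int, feat_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialized target network (never trained).
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, feat_dim))
        # Predictor network, trained to match the target's output.
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad = False

    def intrinsic_reward(self, obs: torch.Tensor) -> torch.Tensor:
        # Prediction error is large for unfamiliar observations, so it acts
        # as an exploration bonus: r_total = r_extrinsic + beta * r_intrinsic.
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=-1)
```

The predictor is updated on the same squared error, so the bonus shrinks for observations the agent has visited often, which is what makes it useful for gathering sparse-reward experience.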

시연에 의해 유도된 탐험을 통한 시각 기반의 물체 조작 (Visual Object Manipulation Based on Exploration Guided by Demonstration)

  • 김두준;조현준;송재복
    • 로봇학회논문지 / Vol. 17, No. 1 / pp.40-47 / 2022
  • A reward function suitable for the task is required to manipulate objects through reinforcement learning. However, it is difficult to design the reward function if ample information about the objects cannot be obtained. In this study, a demonstration-based object manipulation algorithm called stochastic exploration guided by demonstration (SEGD) is proposed to solve this reward-design problem. SEGD is a reinforcement learning algorithm in which a sparse reward explorer (SRE) and an interpolated policy using demonstration (IPD) are added to soft actor-critic (SAC). SRE ensures the training of SAC's critic by collecting prior data, and IPD limits the exploration space by keeping SEGD's actions similar to the expert's. Through these two components, SEGD can learn from the task's sparse reward alone, without a hand-designed reward function. To verify SEGD, experiments were conducted on three tasks, in which it demonstrated its effectiveness with success rates of more than 96.5%.
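As a rough illustration of the IPD component, the sketch below blends the demonstrated (expert) action with the policy's action, so early actions stay close to the demonstration and the exploration space is restricted. The linear annealing schedule and all names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of an interpolated policy using demonstration (IPD-like).
import numpy as np

def interpolated_action(policy_action: np.ndarray,
                        expert_action: np.ndarray,
                        step: int,
                        anneal_steps: int = 10_000) -> np.ndarray:
    """Blend expert and policy actions, trusting the policy more over time."""
    # lam decays from 1 (pure expert) to 0 (pure policy) as training proceeds;
    # the linear schedule here is an assumption.
    lam = max(0.0, 1.0 - step / anneal_steps)
    return lam * expert_action + (1.0 - lam) * policy_action
```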

Reward Shaping for a Reinforcement Learning Method-Based Navigation Framework

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • 한국정보통신학회 학술대회논문집 / 2022 Fall Conference / pp.9-11 / 2022
  • Applying reinforcement learning in everyday applications and varied environments has proved the potential of the field and revealed pitfalls along the way. In robotics, a learning agent gradually takes over control of a robot by abstracting the robot's navigation model with its inputs and outputs, thus reducing human intervention. The challenge for the agent is to implement a feedback function that facilitates learning of an MDP problem in an environment while reducing the method's convergence time. In this paper we implement a reward shaping system in a ROS environment that avoids sparse rewards, which provide little data to the learning agent. Reward shaping gives intermediate rewards that prioritize behaviors bringing the robot closer to the goal, helping the algorithm converge quickly. We use a pseudocode implementation as an illustration of the method.
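The abstract points to a pseudocode illustration; as a hedged stand-in, the sketch below shows one standard realization of the idea, potential-based reward shaping with the potential Phi(s) = -distance_to_goal(s). Function names and the 2-D pose representation are assumptions, not the paper's ROS code.

```python
# Minimal sketch of potential-based reward shaping for goal-directed navigation.
import math

def distance_to_goal(pos: tuple[float, float], goal: tuple[float, float]) -> float:
    return math.hypot(goal[0] - pos[0], goal[1] - pos[1])

def shaped_reward(sparse_reward: float,
                  pos: tuple[float, float],
                  next_pos: tuple[float, float],
                  goal: tuple[float, float],
                  gamma: float = 0.99) -> float:
    # F(s, s') = gamma * Phi(s') - Phi(s) gives an intermediate reward for
    # progress toward the goal without changing the optimal policy.
    phi_s = -distance_to_goal(pos, goal)
    phi_next = -distance_to_goal(next_pos, goal)
    return sparse_reward + gamma * phi_next - phi_s
```

Because the shaping term is potential-based, it densifies the feedback signal while leaving the optimal policies of the underlying MDP unchanged.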


멀티 에이전트 강화학습 기술 동향 (A Survey on Recent Advances in Multi-Agent Reinforcement Learning)

  • 유병현;데브라니 데비;김현우;송화전;박경문;이성원
    • 전자통신동향분석 / Vol. 35, No. 6 / pp.137-149 / 2020
  • Several multi-agent reinforcement learning (MARL) algorithms have achieved overwhelming results in recent years. They have demonstrated their potential in solving complex problems in the fields of real-time strategy online games, robotics, and autonomous vehicles. However, these algorithms face many challenges when dealing with massive problem spaces in sparse reward environments. Based on the centralized training and decentralized execution (CTDE) architecture, the MARL algorithms discussed in the literature aim to solve the current challenges by formulating novel concepts of inter-agent modeling, credit assignment, multi-agent communication, and the exploration-exploitation dilemma. The fundamental objective of this paper is to deliver a comprehensive survey of existing MARL algorithms based on the problem statements rather than on the technologies. We also discuss several experimental frameworks to provide insight into the use of these algorithms and to motivate some promising directions for future research.
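To make the CTDE architecture mentioned in this survey concrete, the sketch below shows its simplest value-decomposition form (VDN-style additive mixing, a precursor of QMIX): each agent keeps its own Q-network for decentralized execution, while the sum of the chosen action values is trained centrally. This simplified form is illustrative only; the surveyed algorithms differ in how they mix the per-agent values.

```python
# Hedged CTDE sketch with additive value decomposition (VDN-style).
import torch
import torch.nn as nn

class AgentQ(nn.Module):
    """Per-agent Q-network; used alone at execution time (decentralized)."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action values for one agent

def q_total(agent_qs: list[torch.Tensor], actions: list[torch.Tensor]) -> torch.Tensor:
    # Centralized training target: sum each agent's chosen action value.
    # QMIX would replace this sum with a monotonic mixing network.
    chosen = [q.gather(-1, a.unsqueeze(-1)).squeeze(-1)
              for q, a in zip(agent_qs, actions)]
    return torch.stack(chosen, dim=0).sum(dim=0)
```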