• Title/Summary/Keyword: Multi-Agent Reinforcement Learning


Improvements of pursuit performance using episodic parameter optimization in probabilistic games (에피소드 매개변수 최적화를 이용한 확률게임에서의 추적정책 성능 향상)

  • Kwak, Dong-Jun; Kim, H.-Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.3 / pp.215-221 / 2012
  • In this paper, we introduce an optimization method to improve the pursuit performance of pursuers in a pursuit-evasion game (PEG). The pursuers build a probability map and employ a hybrid pursuit policy that combines the merits of the local-max and global-max pursuit policies to search for and capture evaders as quickly as possible in a 2-dimensional space. We propose an episodic parameter optimization (EPO) algorithm to learn good values for the weighting parameters of the hybrid pursuit policy. The EPO algorithm runs many episodes of the PEG repeatedly, accumulates the reward of each episode through reinforcement learning, and selects the candidate weighting parameter that maximizes the total averaged reward using the golden section search method. We found the best pursuit policy in various situations, namely different numbers of evaders and different sizes of the search space, and analyzed the results.
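The EPO procedure described in this abstract reduces, per outer iteration, to a one-dimensional golden section search over the hybrid-policy weighting parameter, where each candidate weight is scored by averaging the reward over repeated PEG episodes. Below is a minimal Python sketch of that search step under stated assumptions: run_episode is a hypothetical stand-in for a single pursuit-evasion episode, and the search interval, tolerance, and episode count are illustrative rather than values from the paper.

```python
import random

GOLDEN = (5 ** 0.5 - 1) / 2  # ~0.618, the golden ratio fraction used to place probes


def run_episode(weight):
    """Hypothetical placeholder for one PEG episode; returns a noisy reward peaked near 0.7."""
    return -(weight - 0.7) ** 2 + random.gauss(0, 0.05)


def averaged_reward(weight, episodes=50):
    """Average the episodic reward over repeated PEG runs for one candidate weight."""
    return sum(run_episode(weight) for _ in range(episodes)) / episodes


def golden_section_search(low, high, tol=1e-2):
    """Find the hybrid-policy weight in [low, high] that maximizes the averaged reward."""
    a, b = low, high
    c = b - GOLDEN * (b - a)
    d = a + GOLDEN * (b - a)
    fc, fd = averaged_reward(c), averaged_reward(d)
    while abs(b - a) > tol:
        if fc > fd:              # maximum lies in [a, d]; reuse the inner probe c
            b, d, fd = d, c, fc
            c = b - GOLDEN * (b - a)
            fc = averaged_reward(c)
        else:                    # maximum lies in [c, b]; reuse the inner probe d
            a, c, fc = c, d, fd
            d = a + GOLDEN * (b - a)
            fd = averaged_reward(d)
    return (a + b) / 2


if __name__ == "__main__":
    best_w = golden_section_search(0.0, 1.0)
    print(f"best hybrid-policy weight (illustrative) ~ {best_w:.3f}")
```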

Collision Avoidance Path Control of Multi-AGV Using Multi-Agent Reinforcement Learning (다중 에이전트 강화학습을 이용한 다중 AGV의 충돌 회피 경로 제어)

  • Choi, Ho-Bin; Kim, Ju-Bong; Han, Youn-Hee; Oh, Se-Won; Kim, Kwi-Hoon
    • KIPS Transactions on Computer and Communication Systems / v.11 no.9 / pp.281-288 / 2022
  • AGVs are often used in industrial applications to transport heavy materials around large industrial buildings such as factories or warehouses. In fulfillment centers in particular, their usefulness for automation is maximized. To increase productivity in warehouses such as fulfillment centers, sophisticated path planning for AGVs is required. We propose a scheme that can be applied to QMIX, a popular cooperative MARL algorithm. The performance was measured with three metrics across several fulfillment center layouts, and the results are compared against those of the original QMIX. Additionally, we visualize the transport paths of the trained AGVs as heat maps to analyze their behavior patterns.
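For context on the QMIX value factorization underlying this work, the sketch below shows a minimal QMIX-style mixing network in PyTorch: per-AGV Q-values are combined into a joint team Q-value through state-conditioned hypernetwork weights that are kept non-negative so the mixer stays monotonic in each agent's Q. The layer sizes, state encoding, and AGV count are illustrative assumptions and do not reflect the paper's configuration or its proposed modifications.

```python
import torch
import torch.nn as nn


class QMixer(nn.Module):
    """Minimal QMIX-style mixer: joins per-AGV Q-values into one team Q-value."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks generate mixing weights from the global state;
        # torch.abs in forward() keeps those weights non-negative (monotonicity).
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        batch = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(batch, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(batch, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(batch, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(batch, 1, 1)
        q_total = torch.bmm(hidden, w2) + b2      # joint Q-value for the AGV team
        return q_total.view(batch, 1)


if __name__ == "__main__":
    mixer = QMixer(n_agents=4, state_dim=16)       # 4 AGVs, hypothetical 16-dim global state
    qs = torch.randn(8, 4)                          # per-AGV Q-values for a batch of 8 samples
    s = torch.randn(8, 16)                          # global state (e.g. a flattened warehouse grid)
    print(mixer(qs, s).shape)                       # -> torch.Size([8, 1])
```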