• Title/Abstract/Keywords: Q-Learning algorithm


Avoidance Behavior of Small Mobile Robots based on the Successive Q-Learning

  • Kim, Min-Soo / 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.164.1-164 / 2001
  • Q-learning is a reinforcement learning algorithm that does not need a model of the environment, so it is a suitable approach for learning the behaviors of autonomous agents. However, when it is applied to multi-agent learning with many I/O states, it usually becomes too complex and slow. To overcome this problem in multi-agent learning systems, we propose the successive Q-learning algorithm. Successive Q-learning divides the state-action pairs that agents can have into several Q-functions, so it can reduce the complexity and the amount of computation. The algorithm is suitable for multi-agent learning in a dynamically changing environment. The proposed successive Q-learning algorithm is applied to the prey-predator problem with one prey and two predators, and its effectiveness is verified by the efficient avoidance ability of the prey agent.

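The paper itself does not include code; the sketch below is only an illustration of the idea stated in the abstract: splitting the state-action space into several smaller Q-functions and combining their estimates when selecting an action. The class name, the summing combination rule, and the way the TD error is spread over the partial tables are assumptions, not the authors' method.

```python
import random
from collections import defaultdict

class SuccessiveQAgent:
    """Toy sketch: one small Q-table per state feature instead of one huge joint table."""

    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        # One partial Q-function per feature (e.g. per sensor or per other agent).
        self.q = [defaultdict(float) for _ in range(n_features)]

    def value(self, state, action):
        # Combine the partial Q-functions; here simply by summing them (assumption).
        return sum(self.q[i][(s_i, action)] for i, s_i in enumerate(state))

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        best_next = max(self.value(next_state, a) for a in range(self.n_actions))
        td = reward + self.gamma * best_next - self.value(state, action)
        # Spread the TD error evenly over the partial Q-functions (assumed update rule).
        for i, s_i in enumerate(state):
            self.q[i][(s_i, action)] += self.alpha * td / len(self.q)
```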

Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘 (Improved Deep Q-Network Algorithm Using Self-Imitation Learning)

  • 선우영민;이원창 / 전기전자학회논문지 / Vol. 25, No. 4 / pp.644-649 / 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that helps an agent find an optimal policy by exploiting its good past experiences. Combined with reinforcement learning algorithms that have an actor-critic structure, it has shown considerable improvements in a variety of environments. However, even though Self-Imitation Learning is of great help to reinforcement learning, its application has been limited to algorithms with an actor-critic architecture. This paper proposes a way to apply Self-Imitation Learning to DQN, a value-based reinforcement learning algorithm, and trains the resulting DQN in several environments. By comparing the results with the existing ones, we show that Self-Imitation Learning can also be applied to DQN and can improve DQN's performance.
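
The abstract only states that the self-imitation objective is attached to DQN; the exact loss used by the authors is not given here. The fragment below is a hypothetical sketch of the usual self-imitation idea adapted to a Q-network: replay stored transitions and push Q(s, a) up toward the observed discounted return R only when R exceeds the current estimate. The function name and tensor shapes are assumptions.

```python
import torch

def sil_loss_for_dqn(q_net, states, actions, returns):
    """Self-imitation-style auxiliary loss for a Q-network (illustrative, not the paper's exact loss).

    states:  (B, obs_dim) float tensor, actions: (B,) long tensor,
    returns: (B,) discounted episode returns R stored in the replay buffer.
    """
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a)
    advantage = (returns - q_sa).clamp(min=0.0)                      # imitate only when R > Q(s, a)
    return 0.5 * (advantage ** 2).mean()

# Training step (sketch): add to the ordinary DQN TD loss with a small weight.
# total_loss = dqn_td_loss + sil_weight * sil_loss_for_dqn(q_net, s, a, R)
```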

Q-learning 알고리즘이 성능 향상을 위한 CEE(CrossEntropyError)적용 (Applying CEE (CrossEntropyError) to improve performance of Q-Learning algorithm)

  • 강현구;서동성;이병석;강민수 / 한국인공지능학회지 / Vol. 5, No. 1 / pp.1-9 / 2017
  • Recently, the Q-Learning algorithm, a kind of reinforcement learning, has mainly been used to implement artificial intelligence systems in combination with deep learning, and much research is under way to improve its performance. The purpose of this paper is to improve the performance of the Q-Learning algorithm by applying the Cross Entropy Error to its loss function. Because the mean squared error conventionally used in Q-Learning makes it difficult to measure the error precisely, the Cross Entropy Error, known to be highly accurate, is applied to the loss function instead. In the experiments, the success rate with the Mean Squared Error used in existing reinforcement learning was about 12%, while the success rate with the Cross Entropy Error used in deep learning was about 36%.
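
The abstract does not spell out how Q-values are turned into probabilities before the cross-entropy is computed, so the snippet below shows only one plausible reading: take the softmax over the predicted Q-vector and use the target (greedy) action as the label, in place of the usual mean-squared TD error. All names are assumptions.

```python
import numpy as np

def mse_loss(q_pred, q_target):
    """Conventional Q-learning loss: mean squared TD error over the Q-vector."""
    return np.mean((q_target - q_pred) ** 2)

def cross_entropy_loss(q_pred, target_action):
    """One way to apply cross-entropy error (CEE) to Q-values: softmax the
    predicted Q-vector and penalize low probability of the target action."""
    z = q_pred - q_pred.max()                 # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return -np.log(probs[target_action] + 1e-12)

q_pred = np.array([0.2, 0.5, 0.1, 0.4])       # Q(s, ·) predicted by the network
print(cross_entropy_loss(q_pred, target_action=1))
```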

Q-value Initialization을 이용한 Reinforcement Learning Speedup Method (Reinforcement learning Speedup method using Q-value Initialization)

  • 최정환 / 대한전자공학회:학술대회논문집 / 대한전자공학회 2001년도 하계종합학술대회 논문집(3) / pp.13-16 / 2001
  • In reinforcement learning, Q-learning converges quite slowly to a good policy because searching for the goal state takes a very long time in a large stochastic domain. So I propose a speedup method using Q-value initialization for model-free reinforcement learning. The speedup method learns a naive model of the domain and builds boundaries around the goal state. Using these boundaries, it assigns initial Q-values to the state-action pairs and runs Q-learning from those initial values. The initial Q-values guide the agent toward the goal state in the early stages of learning, so Q-learning updates Q-values efficiently. It therefore saves the exploration time spent searching for the goal state and performs better than plain Q-learning. I present the Speedup Q-learning algorithm to implement the speedup method. The algorithm is evaluated in a grid-world domain and compared to Q-learning.

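As a rough illustration of the idea in the abstract (not the paper's code), the sketch below seeds a grid-world Q-table with optimistic values inside a boundary of states near the goal before ordinary Q-learning starts, so that early exploration is pulled toward the goal. The distance measure, radius, and decay are assumptions.

```python
import numpy as np

def init_q_near_goal(n_rows, n_cols, n_actions, goal, radius=2, boost=1.0):
    """Assign non-zero initial Q-values to state-action pairs within a
    Manhattan-distance 'boundary' around the goal (illustrative assumption)."""
    q = np.zeros((n_rows, n_cols, n_actions))
    gr, gc = goal
    for r in range(n_rows):
        for c in range(n_cols):
            d = abs(r - gr) + abs(c - gc)
            if 0 < d <= radius:
                q[r, c, :] = boost / d        # closer to the goal -> larger initial value
    return q

q_table = init_q_near_goal(n_rows=10, n_cols=10, n_actions=4, goal=(9, 9))
# q_table is then used as the starting point for standard Q-learning updates.
```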

에이전트 학습 속도 향상을 위한 Q-Learning 정책 설계 (Q-Learning Policy Design to Speed Up Agent Training)

  • 용성중;박효경;유연휘;문일영 / 실천공학교육논문지 / Vol. 14, No. 1 / pp.219-224 / 2022
  • Q-Learning, widely used as a basic reinforcement learning algorithm, trains an agent in the direction of maximizing reward through the greedy action, which selects the largest value among the rewards of the actions available in the current state. This paper studies a policy that can speed up agent training with Q-Learning in the Frozen Lake 8*8 grid environment. It also compares the training results of the standard Q-Learning algorithm with those of an algorithm that gives the agent's actions a 'directionality' attribute. The analysis shows that the Q-Learning policy proposed in this paper greatly improves both accuracy and training speed compared with the conventional algorithm.
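
The abstract does not define the 'directionality' attribute precisely, so the loop below shows only one way such a bias could look: standard tabular Q-learning, with exploration skewed toward actions that move toward the goal corner of the 8x8 grid. The use of the gymnasium FrozenLake environment and the 0.7 bias probability are assumptions, not the authors' setup.

```python
import numpy as np
import gymnasium as gym   # assumes the gymnasium package and its FrozenLake environment

env = gym.make("FrozenLake8x8-v1", is_slippery=False)
n_states, n_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
DOWN, RIGHT = 1, 2                      # FrozenLake action codes

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        if np.random.rand() < eps:
            # 'Directional' exploration (assumption): prefer moves toward the goal corner.
            action = (np.random.choice([DOWN, RIGHT]) if np.random.rand() < 0.7
                      else env.action_space.sample())
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```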

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Snag-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo / International Journal of Fuzzy Logic and Intelligent Systems / Vol. 8, No. 3 / pp.220-224 / 2008
  • This paper presents polygon-based Q-learning and a parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects, and then sent the robots into a hallway, where some obstacles were lying about, to search for a hidden object. In the experiments, we used four control methods: a random search; a fusion model with distance-based action making (DBAM) and area-based action making (ABAM) to determine the robots' next action; hexagon-based Q-learning; and dodecagon-based Q-learning with the parallel SVM algorithm, which enhances the DBAM/ABAM fusion model. The results show that dodecagon-based Q-learning with the parallel SVM algorithm tracks the object better than the other methods.
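
The abstract does not describe the polygon-based state encoding in detail; the fragment below is only a guess at the general idea: discretize the directions around a robot into 12 angular sectors (a dodecagon) and use the quantized nearest-obstacle distance in each sector as the Q-learning state. Parameter values and the binning rule are assumptions.

```python
import math

def dodecagon_state(robot_xy, obstacle_points, n_sectors=12, max_range=5.0, n_bins=3):
    """Encode the surroundings as one coarse distance bin per angular sector (illustrative)."""
    rx, ry = robot_xy
    nearest = [max_range] * n_sectors
    for ox, oy in obstacle_points:
        angle = math.atan2(oy - ry, ox - rx) % (2 * math.pi)
        sector = int(angle / (2 * math.pi / n_sectors))
        nearest[sector] = min(nearest[sector], math.hypot(ox - rx, oy - ry))
    # Quantize each sector's distance into a few bins so the Q-table stays small.
    return tuple(min(int(d / max_range * n_bins), n_bins - 1) for d in nearest)

state = dodecagon_state((0.0, 0.0), [(1.0, 0.2), (-0.5, 2.0)])   # usable as a Q-table key
```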

대기행렬이론과 Q-러닝 알고리즘을 적용한 지역문화축제 진입차량 주차분산 시뮬레이션 시스템 (A Simulation of Vehicle Parking Distribution System for Local Cultural Festival with Queuing Theory and Q-Learning Algorithm)

  • 조영호;서영건;정대율 / 한국정보시스템학회지:정보시스템연구 / Vol. 29, No. 2 / pp.131-147 / 2020
  • Purpose The purpose of this study is to develop an intelligent vehicle parking distribution system based on a LoRa network under the traffic congestion that occurs during a cultural festival in a local city. This paper proposes a parking dispatch and distribution system that uses a Q-learning algorithm to rapidly disperse traffic that increases suddenly because of in-bound vehicles from outside the city, in real time, and to increase the parking probability across parking lots located throughout the city. Design/methodology/approach The system receives information in real time from an IoT sensor network (LoRa network), which helps resolve the sudden increase in traffic and the parking bottlenecks during a local cultural festival. We applied the simulation system with a queuing model to the Yudeung Festival in Jinju, Korea, and proposed a Q-learning algorithm that can change the learning policy by setting the acceptability value of each parking lot as a threshold, covering the route from the Jinju highway IC (Interchange) to the 7 parking lots. The LoRa network platform lets each vehicle browse parking resource information in real time, and the system periodically updates the Q-table with the Q-learning algorithm as soon as it receives information from the parking lots. Queuing theory with a Poisson arrival distribution is used to obtain the probability distribution function, and the Dijkstra algorithm is used to find the shortest distance. Findings This paper presents a simulation test that verifies the efficiency of the Q-learning algorithm under heavy traffic congestion in a city during a local festival. As a result of the simulation, the proposed algorithm performed well even when each parking lot was somewhat saturated. When an intelligent learning system such as a Q-learning algorithm is applied, vehicles can be distributed more effectively to lots with a high parking probability when the inflow of vehicles from outside increases rapidly at a specific time, such as during a local city cultural festival.
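
The paper's simulation details (arrival rates, thresholds, road network, Dijkstra routing) are not reproduced here; the snippet is a deliberately simplified, hypothetical miniature of the loop described in the abstract: Poisson arrivals of vehicles, a Q-value per parking lot, a reward tied to each lot's acceptability threshold, and an epsilon-greedy dispatch decision. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LOTS = 7
capacity = np.array([300, 250, 200, 400, 150, 350, 300])   # hypothetical lot capacities
threshold = 0.9                                             # acceptability threshold per lot
occupied = np.zeros(N_LOTS)
Q = np.zeros(N_LOTS)                                        # one Q-value per lot (single-state sketch)
alpha, eps = 0.1, 0.1

for step in range(1000):
    arrivals = rng.poisson(lam=5)                           # Poisson arrival of in-bound vehicles
    for _ in range(arrivals):
        lot = rng.integers(N_LOTS) if rng.random() < eps else int(np.argmax(Q))
        utilization = occupied[lot] / capacity[lot]
        # Reward dispatching to lots still below their acceptability threshold (assumption).
        reward = 1.0 if utilization < threshold else -1.0
        if utilization < 1.0:
            occupied[lot] += 1
        Q[lot] += alpha * (reward - Q[lot])
```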

강화학습의 Q-learning을 위한 함수근사 방법 (A Function Approximation Method for Q-learning of Reinforcement Learning)

  • 이영아;정태충 / 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 11 / pp.1431-1438 / 2004
  • Reinforcement learning learns a strategy for achieving a goal through online interaction with the environment. To accelerate the learning speed of Q-learning, the basic algorithm of reinforcement learning, a function approximation method is needed that can cope with the huge state space (the curse of dimensionality) and that suits the characteristics of reinforcement learning. To address these problems, this paper proposes Fuzzy Q-Map, based on online fuzzy clustering. Fuzzy Q-Map is a function approximation method suited to reinforcement learning that supports online learning and can represent the uncertainty of the environment. Fuzzy Q-Map was applied to the mountain car problem and showed accelerated learning in the early stages of training.
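
Fuzzy Q-Map itself is not reproduced here; the sketch below only illustrates the general mechanism the abstract points to: clusters of states grown online, with Q-values stored per cluster and read and updated through fuzzy membership weights. The cluster-creation rule, distance measure, and fuzziness exponent are assumptions.

```python
import numpy as np

class FuzzyQApprox:
    """Membership-weighted Q approximation over online clusters (illustrative sketch)."""

    def __init__(self, n_actions, new_cluster_dist=1.0, alpha=0.1, m=2.0):
        self.centers, self.q = [], []                       # cluster centers and per-cluster Q rows
        self.n_actions, self.alpha = n_actions, alpha
        self.new_cluster_dist, self.m = new_cluster_dist, m

    def _memberships(self, x):
        d = np.array([np.linalg.norm(x - c) + 1e-8 for c in self.centers])
        w = d ** (-2.0 / (self.m - 1.0))                    # fuzzy c-means style weights
        return w / w.sum()

    def value(self, x):
        # Grow a new cluster online when the state is far from all existing centers.
        if not self.centers or min(np.linalg.norm(x - c) for c in self.centers) > self.new_cluster_dist:
            self.centers.append(np.array(x, dtype=float))
            self.q.append(np.zeros(self.n_actions))
        return self._memberships(x) @ np.array(self.q)      # membership-weighted Q(s, ·)

    def update(self, x, action, td_error):
        # Distribute the TD error across clusters in proportion to membership.
        for i, wi in enumerate(self._memberships(x)):
            self.q[i][action] += self.alpha * wi * td_error
```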

Multi-regional Anti-jamming Communication Scheme Based on Transfer Learning and Q Learning

  • Han, Chen;Niu, Yingtao / KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 7 / pp.3333-3350 / 2019
  • A smart jammer launches jamming attacks that degrade transmission reliability. In this paper, smart jamming attacks based on the communication probability over different channels are considered, and an anti-jamming Q-learning algorithm (AQLA) is developed to obtain anti-jamming knowledge for the local region. To accelerate the learning process across multiple regions, a multi-regional intelligent anti-jamming learning algorithm (MIALA), which utilizes knowledge transferred from neighboring regions, is proposed. The MIALA algorithm is evaluated through simulations, and the results show that it is capable of learning the jamming rules and effectively speeds up the learning rate of the whole communication region when the jamming rules in neighboring regions are similar.
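
AQLA and MIALA themselves are not listed here; the fragment below is only a hedged sketch of the transfer step the abstract describes: ordinary Q-learning over channel choices within one region, warm-started from a neighboring region's Q-table instead of from zeros. The reward model and jamming probabilities are assumptions.

```python
import numpy as np

def q_learning_channels(jam_prob, q_init=None, episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Single-state Q-learning over channels; reward 1 if the chosen channel is not jammed."""
    rng = np.random.default_rng()
    n_channels = len(jam_prob)
    q = np.zeros(n_channels) if q_init is None else q_init.copy()   # transfer = warm start
    for _ in range(episodes):
        ch = rng.integers(n_channels) if rng.random() < eps else int(np.argmax(q))
        reward = 0.0 if rng.random() < jam_prob[ch] else 1.0
        q[ch] += alpha * (reward + gamma * np.max(q) - q[ch])
    return q

q_region_a = q_learning_channels(jam_prob=[0.9, 0.2, 0.8, 0.3])      # region A learned from scratch
q_region_b = q_learning_channels(jam_prob=[0.85, 0.25, 0.8, 0.4],    # region B with similar jamming
                                 q_init=q_region_a)                  # starts from A's knowledge
```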

멀티-스텝 누적 보상을 활용한 Max-Mean N-Step 시간차 학습 (Max-Mean N-step Temporal-Difference Learning Using Multi-Step Return)

  • 황규영;김주봉;허주성;한연희 / 정보처리학회논문지:컴퓨터 및 통신 시스템 / Vol. 10, No. 5 / pp.155-162 / 2021
  • n-step temporal-difference (TD) learning combines the Monte Carlo method and 1-step TD learning and, with an appropriate choice of n, is known to outperform both, but choosing the optimal n is difficult. To relieve the difficulty of choosing n in n-step TD learning, this paper exploits two properties: overestimation of Q can improve performance in early learning, and when Q ≈ Q* all n-step returns have similar values. We propose a new learning target, the Ω-return, composed of the maximum and the mean of all k-step returns for 1 ≤ k ≤ n. Finally, we compare the proposed method against n-step TD learning in the Atari game environments of OpenAI Gym and show that it outperforms the n-step TD learning algorithm.
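
The exact weighting between the max and mean terms is not given in the abstract, so the function below simply averages them with equal weight; it only shows how the k-step returns for 1 ≤ k ≤ n are combined into an Ω-return-style target. Function and argument names are assumptions.

```python
import numpy as np

def omega_return(rewards, next_values, gamma=0.99):
    """Max-Mean target built from all k-step returns, 1 <= k <= n (illustrative).

    rewards:     r_{t+1}, ..., r_{t+n} observed along the trajectory
    next_values: bootstrap estimates max_a Q(s_{t+k}, a) for k = 1..n
    """
    k_step_returns = []
    g = 0.0
    for k, (r, v) in enumerate(zip(rewards, next_values), start=1):
        g += gamma ** (k - 1) * r                     # discounted sum of the first k rewards
        k_step_returns.append(g + gamma ** k * v)     # bootstrap after k steps
    k_step_returns = np.array(k_step_returns)
    # Combine the maximum and the mean of the k-step returns (equal weighting assumed).
    return 0.5 * k_step_returns.max() + 0.5 * k_step_returns.mean()

target = omega_return(rewards=[0.0, 0.0, 1.0], next_values=[0.4, 0.7, 0.0])
```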