• Title/Summary/Keyword: Q-Learning algorithm


Avoidance Behavior of Small Mobile Robots based on the Successive Q-Learning

  • Kim, Min-Soo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.164.1-164
    • /
    • 2001
  • Q-learning is a reinforcement learning algorithm that does not require a model of the environment, which makes it a suitable approach for learning behaviors for autonomous agents. However, when it is applied to multi-agent learning with many I/O states, it is usually too complex and slow. To overcome this problem in multi-agent learning systems, we propose the successive Q-learning algorithm. Successive Q-learning divides the state-action pairs that agents can have across several Q-functions, reducing complexity and computational cost. This makes the algorithm suitable for multi-agent learning in a dynamically changing environment. The proposed successive Q-learning algorithm is applied to a prey-predator problem with one prey and two predators, and its effectiveness is verified by the efficient avoidance ability of the prey agent.

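A minimal sketch of the decomposition this abstract describes, assuming one Q-function per observed component (e.g., per predator); the class name, combination-by-sum rule, and hyperparameters are illustrative assumptions, not the paper's exact formulation:

```python
import random
from collections import defaultdict

class SuccessiveQ:
    """Successive Q-learning sketch: several small Q-functions, one per
    state component, instead of one Q-table over the joint state."""

    def __init__(self, n_components, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.qs = [defaultdict(float) for _ in range(n_components)]  # Q_i[(s_i, a)]
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def value(self, state, a):
        # state is a tuple of per-component observations (s_1, ..., s_n)
        return sum(q[(s_i, a)] for q, s_i in zip(self.qs, state))

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)   # explore
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, state, a, reward, next_state):
        # each component Q-function is updated independently, which keeps
        # table size linear in the number of components
        for q, s_i, ns_i in zip(self.qs, state, next_state):
            best_next = max(q[(ns_i, b)] for b in self.actions)
            q[(s_i, a)] += self.alpha * (reward + self.gamma * best_next - q[(s_i, a)])
```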

Improved Deep Q-Network Algorithm Using Self-Imitation Learning (Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘)

  • Sunwoo, Yung-Min;Lee, Won-Chang
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.644-649
    • /
    • 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that helps an agent find an optimal policy by exploiting past good experiences. When Self-Imitation Learning is combined with reinforcement learning algorithms that have an actor-critic architecture, it shows performance improvements in various game environments. However, its applications have been limited to algorithms with an actor-critic architecture. In this paper, we propose a method for applying Self-Imitation Learning to Deep Q-Network, a value-based deep reinforcement learning algorithm, and train it in various game environments. By comparing the proposed algorithm with ordinary Deep Q-Network training results, we show that Self-Imitation Learning can be applied to Deep Q-Network and improves its performance.
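
The core of the idea fits in a short sketch: the self-imitation term pushes Q(s, a) up toward a stored return R only when R exceeds the current estimate. A PyTorch-style network is assumed, and the function and variable names below are illustrative:

```python
import torch

def sil_value_loss(q_net, states, actions, returns):
    """Self-imitation loss for a value-based learner (sketch): raises
    Q(s, a) toward past returns R only where R > Q(s, a), i.e. it
    imitates transitions that turned out better than expected."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    advantage = (returns - q_sa).clamp(min=0.0)   # (R - Q)^+
    return (advantage ** 2).mean()

# Training step (sketch): total_loss = td_loss + beta * sil_value_loss(...),
# with the SIL batch drawn from a replay buffer prioritized by (R - Q)^+.
```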

Applying CEE (CrossEntropyError) to improve performance of Q-Learning algorithm (Q-learning 알고리즘이 성능 향상을 위한 CEE(CrossEntropyError)적용)

  • Kang, Hyun-Gu;Seo, Dong-Sung;Lee, Byeong-seok;Kang, Min-Soo
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.1
    • /
    • pp.1-9
    • /
    • 2017
  • Recently, the Q-Learning algorithm, a kind of reinforcement learning, has mainly been used to implement artificial intelligence systems in combination with deep learning, and much research aims to improve its performance. The purpose of this paper is to improve the performance of the Q-Learning algorithm by applying Cross Entropy Error to its loss function. Because the mean squared error conventionally used in Q-Learning makes it difficult to measure the exact error rate, the Cross Entropy Error, known to be highly accurate, is applied to the loss function instead. Experimental results show a success rate of about 12% with the Mean Squared Error used in existing reinforcement learning and about 36% with the Cross Entropy Error used in deep learning.
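
As a hedged illustration of the loss swap described here, the sketch below replaces the squared TD error with a cross-entropy term over softmaxed Q-values against a one-hot target action; this is one plausible reading of the method, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy_loss(q_values, target_action):
    """Cross-entropy between the softmax of predicted Q-values and a
    one-hot target action, in place of the usual mean squared TD error."""
    z = q_values - q_values.max()          # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()    # softmax over actions
    return -np.log(probs[target_action] + 1e-12)
```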

Reinforcement learning Speedup method using Q-value Initialization (Q-value Initialization을 이용한 Reinforcement Learning Speedup Method)

  • 최정환
    • Proceedings of the IEEK Conference
    • /
    • 2001.06c
    • /
    • pp.13-16
    • /
    • 2001
  • In reinforcement learning, Q-learning converges quite slowly to a good policy, because searching for the goal state takes a very long time in a large stochastic domain. I therefore propose a speedup method using Q-value initialization for model-free reinforcement learning. The speedup method learns a naive model of a domain and builds boundaries around the goal state. Using these boundaries, it assigns initial Q-values to the state-action pairs and then runs Q-learning from those initial Q-values. The initial Q-values guide the agent to the goal state in the early stages of learning, so that Q-learning updates Q-values efficiently. The method therefore saves the exploration time spent searching for the goal state and performs better than plain Q-learning. I present the Speedup Q-learning algorithm to implement this method; it is evaluated in a grid-world domain and compared to Q-learning.

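A minimal sketch of the initialization idea, assuming a grid world where "boundaries around the goal state" translate into optimistic initial Q-values within a Manhattan-distance radius of the goal; the radius and boost values are illustrative assumptions:

```python
import numpy as np

def initialize_q(grid_shape, goal, n_actions, radius=3, boost=1.0):
    """Optimistic Q-value initialization near the goal: states within
    `radius` of the goal start higher, pulling early exploration toward
    the goal region so ordinary Q-learning converges faster."""
    rows, cols = grid_shape
    q = np.zeros((rows, cols, n_actions))
    gr, gc = goal
    for r in range(rows):
        for c in range(cols):
            d = abs(r - gr) + abs(c - gc)          # Manhattan distance to goal
            if d <= radius:
                q[r, c, :] = boost * (1.0 - d / (radius + 1))
    return q
```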

Q-Learning Policy Design to Speed Up Agent Training (에이전트 학습 속도 향상을 위한 Q-Learning 정책 설계)

  • Yong, Sung-jung;Park, Hyo-gyeong;You, Yeon-hwi;Moon, Il-young
    • Journal of Practical Engineering Education
    • /
    • v.14 no.1
    • /
    • pp.219-224
    • /
    • 2022
  • Q-Learning is widely used as a basic algorithm for reinforcement learning. Q-Learning trains the agent to maximize reward through greedy action selection, choosing the action with the largest value among those available in the current state. In this paper, we studied a policy that can speed up agent training with Q-Learning in the Frozen Lake 8×8 grid environment. We compared the training results of the standard Q-learning algorithm with those of an algorithm that gives agent movement a 'direction' attribute. The analysis shows that the Q-Learning policy proposed in this paper significantly increases both accuracy and training speed compared to the general algorithm.
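
For reference, a baseline tabular Q-learning loop on the same Frozen Lake 8×8 environment (the paper's proposed 'direction' policy is not reproduced here; this is the general algorithm it was compared against, written for the Gymnasium API):

```python
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake8x8-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

for episode in range(20_000):
    state, _ = env.reset()
    done = False
    while not done:
        if np.random.rand() < eps:             # epsilon-greedy exploration
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))  # greedy action
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # standard one-step Q-learning update
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```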

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.220-224
    • /
    • 2008
  • This paper presents polygon-based Q-learning and a parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects, and sent the robots into a hallway strewn with obstacles to search for a hidden object. In the experiment, we used four different control methods: a random search; a fusion model with Distance-based action making (DBAM) and Area-based action making (ABAM) to determine the robots' next action; hexagon-based Q-learning; and dodecagon-based Q-learning with a parallel SVM algorithm that enhances the DBAM/ABAM fusion model. The results show that dodecagon-based Q-learning with the parallel SVM algorithm outperforms the other algorithms at object tracking.
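
A hedged sketch of what "dodecagon-based" state encoding might look like: the bearing from a robot to the tracked object is quantized into one of 12 angular sectors, giving a compact discrete state for the Q-table (the function and its details are illustrative assumptions, not the paper's implementation):

```python
import math

def polygon_state(robot_xy, target_xy, n_sides=12):
    """Quantize the bearing from robot to target into one of `n_sides`
    angular sectors (12 sectors = dodecagon-based encoding)."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sides))   # sector index 0 .. n_sides-1
```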

A Simulation of Vehicle Parking Distribution System for Local Cultural Festival with Queuing Theory and Q-Learning Algorithm (대기행렬이론과 Q-러닝 알고리즘을 적용한 지역문화축제 진입차량 주차분산 시뮬레이션 시스템)

  • Cho, Youngho;Seo, Yeong Geon;Jeong, Dae-Yul
    • The Journal of Information Systems
    • /
    • v.29 no.2
    • /
    • pp.131-147
    • /
    • 2020
  • Purpose The purpose of this study is to develop an intelligent vehicle parking distribution system based on a LoRa network for the traffic congestion that arises during a cultural festival in a local city. This paper proposes a parking dispatch and distribution system that uses a Q-learning algorithm to rapidly disperse, in real time, the traffic that surges because of inbound vehicles from outside the city, and to increase the probability of finding a space in the parking lots spread across the city. Design/methodology/approach The system gets real-time information from an IoT sensor network (LoRa network), which helps resolve the sudden increase in traffic and parking bottlenecks during a local cultural festival. We applied the simulation system, together with a queuing model, to the Yudeung Festival in Jinju, Korea. We propose a Q-learning algorithm that can change the learning policy by setting each parking lot's acceptability value as a threshold on the routes from the Jinju highway IC (Interchange) to the 7 parking lots. The LoRa network platform lets each vehicle browse parking resource information in real time, and the system periodically updates the Q-table with the Q-learning algorithm as soon as it receives information from the parking lots. Queuing theory with a Poisson arrival distribution is used to obtain the probability distribution function, and the Dijkstra algorithm is used to find the shortest distance. Findings This paper presents a simulation test that verifies the efficiency of the Q-learning algorithm under heavy traffic in a city during a local festival. As a result of the simulation, the proposed algorithm performed well even when each parking lot was somewhat saturated. When an intelligent learning system such as a Q-learning algorithm is applied, vehicles can be distributed more effectively to lots with a high parking probability when vehicle inflow from outside increases rapidly at a specific time, such as during a local city cultural festival.
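
A simplified sketch of the dispatch loop, assuming state = discretized occupancy of the 7 lots and action = which lot a vehicle is directed to; the threshold, reward shape, and discretization are illustrative assumptions (the actual system also used LoRa sensor data, a Poisson queuing model, and Dijkstra shortest paths):

```python
import random
from collections import defaultdict

LOTS = 7            # parking lots reachable from the highway IC
THRESHOLD = 0.9     # per-lot acceptability threshold (assumed value)

Q = defaultdict(float)            # Q[(state, lot)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def state_of(occupancy):
    """Discretize each lot's occupancy ratio into empty / busy / full."""
    return tuple(0 if o < 0.5 else (1 if o < THRESHOLD else 2) for o in occupancy)

def dispatch(occupancy):
    """Epsilon-greedy choice of the lot to send the next vehicle to."""
    s = state_of(occupancy)
    if random.random() < eps:
        return random.randrange(LOTS)
    return max(range(LOTS), key=lambda lot: Q[(s, lot)])

def update(occupancy, lot, reward, next_occupancy):
    """One-step Q-learning update after observing the parking outcome."""
    s, ns = state_of(occupancy), state_of(next_occupancy)
    best_next = max(Q[(ns, l)] for l in range(LOTS))
    Q[(s, lot)] += alpha * (reward + gamma * best_next - Q[(s, lot)])
```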

A Function Approximation Method for Q-learning of Reinforcement Learning (강화학습의 Q-learning을 위한 함수근사 방법)

  • 이영아;정태충
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1431-1438
    • /
    • 2004
  • Reinforcement learning learns policies for accomplishing a task's goal through experience gained by interaction between the agent and the environment. Q-learning, the basic algorithm of reinforcement learning, suffers from the curse of dimensionality and slow learning speed in the early stage of learning. To solve these problems, new function approximation methods suitable for reinforcement learning should be studied. In this paper we suggest the Fuzzy Q-Map algorithm, which is based on online fuzzy clustering. Fuzzy Q-Map is a function approximation method suited to reinforcement learning that supports online learning and can express the uncertainty of the environment. We ran an experiment on the mountain car problem with Fuzzy Q-Map, and the results show that learning speed is accelerated in the early stage of learning.
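
A hedged sketch of membership-weighted function approximation in the spirit of Fuzzy Q-Map: Q(s, a) is a membership-weighted sum of per-cluster action values, and the TD update credits each cluster by its membership. Gaussian memberships and fixed centers are simplifying assumptions; the paper's clustering is online:

```python
import numpy as np

class FuzzyQMap:
    """Fuzzy function approximation for Q-learning (sketch):
    Q(s, a) = sum_i mu_i(s) * q[i, a] with Gaussian memberships mu_i."""

    def __init__(self, centers, n_actions, alpha=0.1, gamma=0.99, width=0.5):
        self.centers = np.asarray(centers, dtype=float)   # (n_clusters, state_dim)
        self.q = np.zeros((len(self.centers), n_actions))
        self.alpha, self.gamma, self.width = alpha, gamma, width

    def membership(self, s):
        d2 = ((self.centers - s) ** 2).sum(axis=1)
        mu = np.exp(-d2 / (2 * self.width ** 2))
        return mu / (mu.sum() + 1e-12)          # normalized memberships

    def value(self, s):
        return self.membership(s) @ self.q      # Q(s, .): one value per action

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.value(s_next).max()
        td = target - self.value(s)[a]
        self.q[:, a] += self.alpha * td * self.membership(s)  # credit by membership
```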

Multi-regional Anti-jamming Communication Scheme Based on Transfer Learning and Q Learning

  • Han, Chen;Niu, Yingtao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3333-3350
    • /
    • 2019
  • A smart jammer launches jamming attacks that degrade transmission reliability. In this paper, smart jamming attacks based on the communication probability over different channels are considered, and an anti-jamming Q-learning algorithm (AQLA) is developed to obtain anti-jamming knowledge for the local region. To accelerate the learning process across multiple regions, a multi-regional intelligent anti-jamming learning algorithm (MIALA) that utilizes knowledge transferred from neighboring regions is proposed. The MIALA algorithm is evaluated through simulations, and the results show that it is capable of learning the jamming rules and effectively speeds up the learning rate of the whole communication region when the jamming rules in neighboring regions are similar.
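
A minimal sketch of the transfer step MIALA relies on: instead of learning each region's channel-selection Q-table from zeros, a new region is warm-started from a neighboring region's table, scaled by an assumed similarity weight (the weight, table shape, and reward convention are illustrative assumptions):

```python
import numpy as np

def transfer_init(neighbor_q, similarity=0.8):
    """Warm-start a region's Q-table from a neighbor's learned table;
    ordinary Q-learning then continues from this initialization."""
    return similarity * neighbor_q.copy()

n_channels = 8
# Q[prev_channel, next_channel], learned in the neighboring region
neighbor_q = np.random.rand(n_channels, n_channels)
local_q = transfer_init(neighbor_q)
# ... continue epsilon-greedy Q-learning over channel choices, with e.g.
# reward 1 for a jamming-free transmission and 0 for a jammed one.
```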

Max-Mean N-step Temporal-Difference Learning Using Multi-Step Return (멀티-스텝 누적 보상을 활용한 Max-Mean N-Step 시간차 학습)

  • Hwang, Gyu-Young;Kim, Ju-Bong;Heo, Joo-Seong;Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.5
    • /
    • pp.155-162
    • /
    • 2021
  • n-step TD learning is a combination of the Monte Carlo method and one-step TD learning. If an appropriate n is selected, n-step TD learning is known to perform better than both the Monte Carlo method and one-step TD learning, but selecting the best value of n is difficult. To resolve this difficulty, we exploit two characteristics: overestimation of Q can improve performance in early learning, and all n-step returns have similar values when Q ≈ Q*. We propose a new learning target composed of the maximum and the mean of all k-step returns for 1 ≤ k ≤ n. Finally, in OpenAI Gym's Atari game environment, we compare the proposed algorithm with n-step TD learning and show that it is superior to the n-step TD learning algorithm.
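
One plausible formalization of the proposed target, as a hedged sketch: compute all k-step returns for 1 ≤ k ≤ n and average their maximum and their mean (the exact weighting used in the paper may differ):

```python
import numpy as np

def max_mean_target(rewards, gamma, q_bootstrap):
    """rewards: [r_{t+1}, ..., r_{t+n}];
    q_bootstrap[k-1]: max_a Q(s_{t+k}, a).
    Returns 0.5 * (max_k G^(k) + mean_k G^(k)) over k = 1..n,
    where G^(k) is the k-step return."""
    partial = 0.0
    returns = []
    for k, r in enumerate(rewards):
        partial += (gamma ** k) * r                          # discounted reward sum
        returns.append(partial + (gamma ** (k + 1)) * q_bootstrap[k])
    returns = np.asarray(returns)
    return 0.5 * (returns.max() + returns.mean())
```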