• Title/Summary/Keyword: Q-Learning algorithm

Optimization of Stock Trading System based on Multi-Agent Q-Learning Framework (다중 에이전트 Q-학습 구조에 기반한 주식 매매 시스템의 최적화)

  • Kim, Yu-Seop;Lee, Jae-Won;Lee, Jong-Woo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.207-212
    • /
    • 2004
  • This paper presents a reinforcement learning framework for stock trading systems. Trading system parameters are optimized by the Q-learning algorithm, and neural networks are adopted for value approximation. In this framework, cooperative multiple agents are used to efficiently integrate global trend prediction and local trading strategy to obtain better trading performance. Agents communicate with one another by sharing training episodes and learned policies, while keeping the overall scheme of conventional Q-learning. Experimental results on KOSPI 200 show that a trading system based on the proposed framework outperforms the market average and makes appreciable profits. Furthermore, in view of risk management, the system is superior to a system trained by supervised learning.
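
The abstract above combines Q-learning, neural-network value approximation, and cooperating agents that share training episodes. As a rough illustration of the episode-sharing idea only (not the paper's actual system), the sketch below lets several tabular Q-learning agents push their transitions into a common pool that every agent learns from; the states, actions, toy environment, and constants are all hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical discrete trading problem: states are coarse market regimes,
# actions are trading decisions. None of these names come from the paper.
STATES = ["up", "flat", "down"]
ACTIONS = ["buy", "hold", "sell"]
ALPHA, GAMMA = 0.1, 0.95

def step(state, action):
    """Toy environment: random next regime, reward favors trend-following."""
    next_state = random.choice(STATES)
    reward = {("up", "buy"): 1.0, ("down", "sell"): 1.0}.get((state, action), -0.1)
    return next_state, reward

class Agent:
    def __init__(self):
        self.q = defaultdict(float)           # Q[(state, action)]

    def act(self, state, eps=0.2):
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2):
        target = r + GAMMA * max(self.q[(s2, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += ALPHA * (target - self.q[(s, a)])

agents = [Agent() for _ in range(3)]
shared_pool = []                              # experience shared between agents

for episode in range(200):
    for agent in agents:
        s = random.choice(STATES)
        for _ in range(10):                   # short trading episode
            a = agent.act(s)
            s2, r = step(s, a)
            shared_pool.append((s, a, r, s2)) # publish the transition
            s = s2
    # every agent replays the shared experience, keeping the usual Q-update
    for agent in agents:
        for transition in random.sample(shared_pool, min(50, len(shared_pool))):
            agent.update(*transition)

print({(s, a): round(agents[0].q[(s, a)], 2) for s in STATES for a in ACTIONS})
```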

Gain Tuning for SMCSPO of Robot Arm with Q-Learning (Q-Learning을 사용한 로봇팔의 SMCSPO 게인 튜닝)

  • Lee, JinHyeok;Kim, JaeHyung;Lee, MinCheol
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.2
    • /
    • pp.221-229
    • /
    • 2022
  • Sliding mode control (SMC) is a robust control method for a robot arm with nonlinear properties. A high switching gain in SMC causes chattering, although SMC can achieve adequate control performance with a high switching gain even without an exact robot model containing the nonlinear and uncertainty terms. To address this problem, SMC with a sliding perturbation observer (SMCSPO) has been studied; this method reduces chattering by compensating for the perturbation estimated by the observer and then choosing a lower switching control gain for the SMC. However, optimal gain tuning is still necessary to obtain better tracking performance and less chattering. This paper proposes a method in which Q-learning automatically tunes the control gains of SMCSPO through an iterative procedure. In this tuning method, the reinforcement learning (RL) reward is set to the negative of the state tracking errors, and the RL action is a change of control gain chosen to maximize the reward as the number of iterated movements increases. A simple motion test for a 7-DOF robot arm was simulated in MATLAB to verify this RL tuning algorithm. The simulation showed that the method can automatically tune the control gains for SMCSPO.
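
As a schematic of the tuning loop described above (reward = negative tracking error, action = a change of the switching gain), the snippet below tunes a single scalar gain of a stand-in plant with tabular Q-learning; the plant model, gain grid, and constants are invented for illustration and are not the paper's SMCSPO simulation.

```python
import random

# Hypothetical discretized values for one SMCSPO switching gain.
GAINS = [round(0.5 * k, 1) for k in range(1, 21)]       # 0.5 ... 10.0
ACTIONS = [-1, 0, +1]                                   # lower / keep / raise gain index
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.2

def tracking_error(gain):
    """Stand-in for one simulated motion: a convex error curve with noise.
    The 'best' gain is around 6.0 purely by construction."""
    return (gain - 6.0) ** 2 + random.gauss(0.0, 0.1)

q = {(i, a): 0.0 for i in range(len(GAINS)) for a in ACTIONS}

idx = 0                                                 # start from the lowest gain
for iteration in range(2000):
    a = (random.choice(ACTIONS) if random.random() < EPS
         else max(ACTIONS, key=lambda x: q[(idx, x)]))
    nxt = min(max(idx + a, 0), len(GAINS) - 1)          # move along the gain grid
    r = -tracking_error(GAINS[nxt])                     # reward = minus tracking error
    best_next = max(q[(nxt, x)] for x in ACTIONS)
    q[(idx, a)] += ALPHA * (r + GAMMA * best_next - q[(idx, a)])
    idx = nxt

print("tuned gain:", GAINS[idx])
```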

Improvement of the Gonu game using progressive deepening in reinforcement learning (강화학습에서 점진적인 심화를 이용한 고누게임의 개선)

  • Shin, YongWoo
    • Journal of Korea Game Society
    • /
    • v.20 no.6
    • /
    • pp.23-30
    • /
    • 2020
  • A game has a very large number of possible positions, so an agent must learn from many cases. This paper uses reinforcement learning and focuses on improving the learning speed. Because reinforcement learning must cover so many cases, it is slow early in learning, so the minimax algorithm was used to speed up learning. To compare the improved performance, a Gonu game was implemented and tested. In the experiments the win rate was high, but ties still occurred. The game tree was then explored further using progressive deepening to reduce the tie cases, and the win rate improved by about 75%.
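
The entry above pairs reinforcement learning with minimax search that is deepened progressively. The following is a generic iterative-deepening minimax routine over a toy two-player game standing in for Gonu; the game class, depth limit, and heuristic are placeholders, not the paper's implementation.

```python
def minimax(state, depth, maximizing, game):
    """Plain depth-limited minimax; `game` supplies moves, results, and a heuristic."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state, maximizing), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in game.moves(state):
            value, _ = minimax(game.result(state, move), depth - 1, False, game)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in game.moves(state):
            value, _ = minimax(game.result(state, move), depth - 1, True, game)
            if value < best:
                best, best_move = value, move
    return best, best_move

def progressive_deepening(state, game, max_depth=8):
    """Search depth 1, 2, ..., max_depth, keeping the deepest completed answer;
    shallow results can seed move ordering or an RL policy in a fuller system."""
    best_move = None
    for depth in range(1, max_depth + 1):
        _, best_move = minimax(state, depth, True, game)
    return best_move

class TinyNim:
    """Toy stand-in for Gonu: players alternately take 1 or 2 stones;
    whoever takes the last stone wins."""
    def is_terminal(self, s):
        return s == 0
    def evaluate(self, s, maximizing):
        if s == 0:                       # the player to move has lost
            return -1 if maximizing else 1
        return 0                         # neutral heuristic at the depth cut-off
    def moves(self, s):
        return [m for m in (1, 2) if m <= s]
    def result(self, s, m):
        return s - m

print(progressive_deepening(7, TinyNim()))   # expected: take 1, leaving a multiple of 3
```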

R-Trader: An Automatic Stock Trading System based on Reinforcement learning (R-Trader: 강화 학습에 기반한 자동 주식 거래 시스템)

  • 이재원;김성동;이종우;채진석
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.785-794
    • /
    • 2002
  • Automatic stock trading systems should be able to solve various kinds of optimization problems, such as market trend prediction, stock selection, and trading strategies, in a unified framework. However, most previous trading systems based on supervised learning are limited in their ultimate performance because they are not mainly concerned with the integration of these subproblems. This paper proposes a stock trading system, called R-Trader, based on reinforcement learning, regarding the process of stock price changes as a Markov decision process (MDP). Reinforcement learning is suitable for the joint optimization of predictions and trading strategies. R-Trader adopts two popular reinforcement learning algorithms, temporal difference (TD) learning and Q-learning, for selecting stocks and for optimizing other trading parameters, respectively. Technical analysis is also adopted to devise the input features of the system, and the value functions are approximated by feedforward neural networks. Experimental results on the Korean stock market show that the proposed system outperforms both the market average and a simple trading system trained by supervised learning, in terms of both profit and risk management.
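
R-Trader couples TD learning for stock selection with Q-learning for the remaining trading parameters, using neural-network value approximation over technical indicators. As one possible reading of the TD part alone, the sketch below runs TD(0) with a small linear approximator over made-up "technical" features; the features, returns, and learning constants are illustrative assumptions, not the paper's configuration.

```python
import random

N_FEATURES = 4
ALPHA, GAMMA = 0.01, 0.99
w = [0.0] * N_FEATURES                     # linear value approximator (stand-in for a NN)

def features(_stock, _day):
    """Placeholder technical-analysis features in [0, 1]; the paper derives
    its real inputs from technical indicators."""
    return [random.random() for _ in range(N_FEATURES)]

def value(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def daily_return(_stock, _day):
    """Placeholder next-day return used as the TD reward."""
    return random.gauss(0.0, 0.01)

for day in range(1000):
    stock = random.randrange(10)
    x, x_next = features(stock, day), features(stock, day + 1)
    r = daily_return(stock, day)
    td_error = r + GAMMA * value(x_next) - value(x)     # TD(0) target minus estimate
    w = [wi + ALPHA * td_error * xi for wi, xi in zip(w, x)]

# stocks would then be ranked by their estimated value for selection
print([round(wi, 4) for wi in w])
```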

Action Selection by Voting with Learning Capability for a Behavior-based Control Approach (행동기반 제어방식을 위한 득점과 학습을 통한 행동선택기법)

  • Jeong, S.M.;Oh, S.R.;Yoon, D.Y.;You, B.J.;Chung, C.C.
    • Proceedings of the KIEE Conference
    • /
    • 2002.11c
    • /
    • pp.163-168
    • /
    • 2002
  • The voting algorithm for action selection performs self-improvement with a reinforcement learning algorithm in a dynamic environment. The proposed voting algorithm improves the navigation of the robot by adapting the eligibility of the behaviors and determining the Command Set Generator (CSG). The navigator that uses the proposed voting algorithm interacts with the CSG by giving weight values and taking reward values. It is necessary to decide which command set controls the mobile robot at a given time and to select among the candidate actions. The command sets were learned online by means of Q-learning. The action selector compares the Q-values of the navigator across heterogeneous behaviors. Finally, a real-world experiment was carried out. The results show good performance in command-set selection as well as convergence of the Q-values.
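
The entry above describes behaviors that vote for candidate commands, with the weights adapted online by Q-learning. A toy version of that vote-then-select step might look like the following; the behavior names, candidate commands, eligibilities, and Q-values are all hypothetical and fixed here purely for illustration.

```python
# Hypothetical behaviors and candidate commands.
BEHAVIORS = ["avoid_obstacle", "go_to_goal", "follow_wall"]
COMMANDS = ["forward", "turn_left", "turn_right"]

# q[(behavior, command)] is how strongly a behavior endorses a command;
# in the paper these weights would be learned online by Q-learning.
q = {
    ("avoid_obstacle", "forward"): 0.1, ("avoid_obstacle", "turn_left"): 0.8,
    ("avoid_obstacle", "turn_right"): 0.3,
    ("go_to_goal", "forward"): 0.9, ("go_to_goal", "turn_left"): 0.2,
    ("go_to_goal", "turn_right"): 0.2,
    ("follow_wall", "forward"): 0.5, ("follow_wall", "turn_left"): 0.1,
    ("follow_wall", "turn_right"): 0.6,
}
eligibility = {"avoid_obstacle": 1.0, "go_to_goal": 0.7, "follow_wall": 0.4}

def select_command():
    """Each behavior casts a vote weighted by its eligibility and Q-values;
    the command with the highest total vote is executed."""
    votes = {c: 0.0 for c in COMMANDS}
    for b in BEHAVIORS:
        for c in COMMANDS:
            votes[c] += eligibility[b] * q[(b, c)]
    return max(votes, key=votes.get)

print(select_command())
```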

Adaptive Packet Scheduling Algorithm in IoT environment (IoT 환경에서의 적응적 패킷 스케줄링 알고리즘)

  • Kim, Dong-Hyun;Lim, Hwan-Hee;Lee, Byung-Jun;Kim, Kyung-Tae;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2018.07a
    • /
    • pp.15-16
    • /
    • 2018
  • This paper proposes a new scheduling scheme that reduces the time required to adapt to new conditions in an Internet of Things (IoT) environment consisting of many sensor nodes. In an IoT environment, data collection and transmission patterns are not defined in advance, so conventional static packet scheduling schemes have limitations. Q-learning can establish a scheduling policy through iterative learning without prior knowledge of the network environment. Building on an existing Q-learning scheduling scheme, this paper proposes a new Q-learning scheduling scheme that initializes the Q-table and reward table using bounds on the packet arrival rate of each queue. Simulation results show that, compared with the existing scheme, the time required to adapt to changing packet arrival rates and service requirements is reduced.
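
The key idea in the abstract above is to initialize the Q-table and reward table from bounds on each queue's packet arrival rate so that the scheduler adapts faster. A minimal tabular version of that initialization plus the usual Q-learning update could look like the following; the queue count, arrival-rate bounds, reward shape, and the collapsed state space are assumptions made for brevity.

```python
import random

N_QUEUES = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Assumed per-queue bounds on the packet arrival rate (packets per slot).
arrival_bounds = [0.8, 0.5, 0.2]

# Initialize the reward table and Q-table from the bounds instead of zeros,
# so queues expected to be busier start with higher priority.
reward_table = {q: arrival_bounds[q] for q in range(N_QUEUES)}
Q = {q: arrival_bounds[q] for q in range(N_QUEUES)}     # state collapsed for brevity

def arrivals():
    """Toy traffic: each queue receives a packet with its bound as probability."""
    return [1 if random.random() < arrival_bounds[q] else 0 for q in range(N_QUEUES)]

backlog = [0] * N_QUEUES
for slot in range(5000):
    for q, a in enumerate(arrivals()):
        backlog[q] += a
    # epsilon-greedy choice of which queue to serve in this slot
    q_serve = (random.randrange(N_QUEUES) if random.random() < EPS
               else max(range(N_QUEUES), key=lambda q: Q[q]))
    reward = reward_table[q_serve] if backlog[q_serve] > 0 else -1.0
    backlog[q_serve] = max(0, backlog[q_serve] - 1)
    Q[q_serve] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[q_serve])

print({q: round(Q[q], 2) for q in range(N_QUEUES)})
```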

Behavior Learning and Evolution of Swarm Robot System using Q-learning and Cascade SVM (Q-learning과 Cascade SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.2
    • /
    • pp.279-284
    • /
    • 2009
  • In swarm robot systems, each robot must behave by itself according to its own states and environment and, if necessary, must cooperate with other robots to carry out a given task. It is therefore essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method using multiple SVMs based on structural risk minimization, combined with distributed genetic algorithms, is proposed for the behavior learning and evolution of collective autonomous mobile robots. Through a distributed genetic algorithm that exchanges, via communication, the chromosomes acquired under different environments, each robot can improve its behavioral ability. In particular, to improve the performance of evolution, selective crossover that exploits the characteristics of the Cascade-SVM-based reinforcement learning is adopted in this paper.
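
The abstract couples Cascade-SVM-based behavior learning with a distributed genetic algorithm in which robots exchange chromosomes and apply selective crossover. The fragment below sketches only the exchange-and-crossover step for bit-string chromosomes scored by a fitness stand-in; the chromosome encoding, fitness function, and probabilities are invented for illustration.

```python
import random

CHROMO_LEN = 16

def fitness(chromo):
    """Stand-in for the behavior score a robot would obtain with this chromosome
    (e.g., accumulated reward from its learned policy)."""
    return sum(chromo)

def selective_crossover(own, received):
    """Keep genes from whichever parent currently scores better, mixing in
    the other parent's genes with a small probability."""
    better, worse = (own, received) if fitness(own) >= fitness(received) else (received, own)
    return [w if random.random() < 0.2 else b for b, w in zip(better, worse)]

# Each robot holds one chromosome evolved under its own environment.
robots = [[random.randint(0, 1) for _ in range(CHROMO_LEN)] for _ in range(4)]

for generation in range(20):
    # "communication": every robot receives the chromosome of a random peer
    for i in range(len(robots)):
        j = random.choice([k for k in range(len(robots)) if k != i])
        child = selective_crossover(robots[i], robots[j])
        if random.random() < 0.1:                      # mutation
            p = random.randrange(CHROMO_LEN)
            child[p] ^= 1
        if fitness(child) >= fitness(robots[i]):       # accept only improvements
            robots[i] = child

print([fitness(r) for r in robots])
```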

Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.3 no.1
    • /
    • pp.33-42
    • /
    • 2002
  • Recently, various experiments applying reinforcement learning to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports have produced mixed results, some successes and some failures, and show that the success of reinforcement learning in the intelligent control of continuous control systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in using a function approximation method in connection with a temporal difference method is the absence of a guarantee of convergence of the algorithm. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation.
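
For intuition about the approximator discussed above, the snippet below performs barycentric interpolation on a one-dimensional grid (where the barycentric weights reduce to linear-interpolation weights that are non-negative and sum to one) and uses it inside a value-iteration-style backup; the toy dynamics, reward, and grid are assumptions, not the paper's construction or its convergence proof.

```python
# One-dimensional barycentric interpolation over a fixed grid of states.
GRID = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [0.0] * len(GRID)          # value estimate stored at the grid points
GAMMA = 0.9

def interpolate(x):
    """Value at x as a convex (barycentric) combination of the two enclosing
    grid points; the weights are non-negative and sum to 1."""
    x = min(max(x, GRID[0]), GRID[-1])
    for i in range(len(GRID) - 1):
        lo, hi = GRID[i], GRID[i + 1]
        if lo <= x <= hi:
            w_hi = (x - lo) / (hi - lo)
            return (1.0 - w_hi) * values[i] + w_hi * values[i + 1]
    return values[-1]

def step(x, a):
    """Toy continuous dynamics and reward: move left/right, reward near 0.5."""
    nxt = min(max(x + 0.1 * a, 0.0), 1.0)
    reward = 1.0 if abs(nxt - 0.5) < 0.05 else 0.0
    return nxt, reward

# Value-iteration backups are performed only at grid points, with the
# interpolator supplying values for the off-grid successor states.
for sweep in range(100):
    new_values = []
    for x in GRID:
        backups = []
        for a in (-1, +1):
            nxt, r = step(x, a)
            backups.append(r + GAMMA * interpolate(nxt))
        new_values.append(max(backups))
    values = new_values

print([round(v, 2) for v in values])
```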

A DASH System Using the A3C-based Deep Reinforcement Learning (A3C 기반의 강화학습을 사용한 DASH 시스템)

  • Choi, Minje;Lim, Kyungshik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.5
    • /
    • pp.297-307
    • /
    • 2022
  • The simple procedural segment selection algorithm commonly used in Dynamic Adaptive Streaming over HTTP (DASH) shows severe weaknesses when providing high-quality streaming services over integrated mobile networks composed of various wired and wireless links. A major issue is how to cope properly with dynamically changing underlying network conditions; the key is to make the segment selection algorithm much more adaptive to fluctuations in network traffic. This paper presents a system architecture that replaces the existing procedural segment selection algorithm with a deep reinforcement learning algorithm based on the Asynchronous Advantage Actor-Critic (A3C). The distributed A3C-based deep learning server is designed and implemented to allow multiple clients in different network conditions to stream videos simultaneously, collect learning data quickly, and learn asynchronously, resulting in greatly improved learning speed as the number of video clients increases. The performance analysis shows that the proposed algorithm outperforms both the conventional DASH algorithm and the Deep Q-Network algorithm in terms of the user's quality of experience and the speed of deep learning.
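
The paper's system runs a distributed, asynchronous A3C server; the fragment below is only a single-worker, synchronous advantage actor-critic update over a handful of bitrate levels, written with NumPy to show the shape of the policy and value updates. Every quantity here (state features, bitrate ladder, QoE reward) is a placeholder rather than the paper's design.

```python
import numpy as np

BITRATES = [0.3, 0.75, 1.2, 2.85]            # Mbps levels a client could request
N_A, N_F = len(BITRATES), 3                  # actions, state features
rng = np.random.default_rng(0)

theta = np.zeros((N_A, N_F))                 # linear softmax policy (actor)
w = np.zeros(N_F)                            # linear value function (critic)
ALPHA_PI, ALPHA_V, GAMMA = 0.01, 0.05, 0.99

def state():
    """Placeholder state: e.g. estimated throughput, buffer level, last bitrate."""
    return rng.random(N_F)

def qoe_reward(action, x):
    """Placeholder QoE: higher bitrate is better unless throughput looks low."""
    throughput = x[0] * 3.0
    bitrate = BITRATES[action]
    rebuffer_penalty = max(0.0, bitrate - throughput)
    return bitrate - 2.0 * rebuffer_penalty

def policy(x):
    logits = theta @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = state()
for step in range(5000):
    pi = policy(x)
    a = rng.choice(N_A, p=pi)
    r = qoe_reward(a, x)
    x_next = state()
    advantage = r + GAMMA * (w @ x_next) - (w @ x)   # TD error as the advantage
    w += ALPHA_V * advantage * x                     # critic update
    grad_log = -np.outer(pi, x)                      # gradient of log softmax policy
    grad_log[a] += x
    theta += ALPHA_PI * advantage * grad_log         # actor (policy-gradient) update
    x = x_next

print(np.round(policy(state()), 3))
```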

IRSML: An intelligent routing algorithm based on machine learning in software defined wireless networking

  • Duong, Thuy-Van T.;Binh, Le Huu
    • ETRI Journal
    • /
    • v.44 no.5
    • /
    • pp.733-745
    • /
    • 2022
  • In software-defined wireless networking (SDWN), optimal routing is one of the effective ways to improve performance. Routing can be done by many different methods, the most common of which formulate an integer linear programming (ILP) problem and build optimal routing metrics. These methods often focus on only one routing objective, such as minimizing the packet blocking probability, minimizing the end-to-end delay (EED), or maximizing the network throughput; it is difficult to consider multiple objectives concurrently in a single routing algorithm. In this paper, we investigate the application of machine learning to routing control in SDWN, and an intelligent routing algorithm based on machine learning is proposed to improve the network performance. The proposed algorithm can optimize multiple routing objectives. Our idea is to combine supervised learning (SL) and reinforcement learning (RL) methods to discover new routes. The SL is used to predict the performance metrics of the links, including the EED, the quality of transmission (QoT), and the packet blocking probability (PBP). The routing itself is done by the RL method. We use the Q-value in the fundamental equation of RL to store the PBP, which is used for route selection. Concurrently, the learning rate coefficient is changed flexibly to enforce the routing constraints during learning; these constraints include the QoT and the EED. Our performance evaluations based on OMNeT++ show that the proposed algorithm significantly improves the network performance in terms of QoT, EED, packet delivery ratio, and network throughput compared with other well-known routing algorithms.
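
As one way to read the Q-value and learning-rate mechanism described above, the sketch below keeps a Q-value per link that tracks an estimated packet blocking probability and simply freezes the update (learning rate set to zero) when a hypothetical QoT or EED constraint is violated; the topology, predicted metrics, and thresholds are all invented for illustration and do not reproduce IRSML.

```python
import random

# Hypothetical topology: each link is a (source, destination) pair.
links = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
GAMMA = 0.9

# Q[link] approximates the packet blocking probability (PBP) along that link.
Q = {l: 0.5 for l in links}

def predicted_metrics(link):
    """Placeholder for the per-link metrics an SL model would normally predict."""
    return {
        "pbp": random.uniform(0.0, 0.3),
        "eed_ms": random.uniform(5.0, 40.0),
        "qot_ok": random.random() > 0.1,
    }

def learning_rate(metrics, eed_limit_ms=30.0):
    """Constraint handling via the learning rate: links violating the QoT or EED
    constraints are not reinforced (rate 0), otherwise a normal rate is used."""
    if not metrics["qot_ok"] or metrics["eed_ms"] > eed_limit_ms:
        return 0.0
    return 0.1

for episode in range(2000):
    link = random.choice(links)
    m = predicted_metrics(link)
    lr = learning_rate(m)
    # downstream estimate: smallest blocking-probability estimate among outgoing links
    downstream = [Q[l] for l in links if l[0] == link[1]]
    target = m["pbp"] + GAMMA * (min(downstream) if downstream else 0.0)
    Q[link] += lr * (target - Q[link])

# route selection from node A: prefer the outgoing link with the lowest PBP estimate
best = min((l for l in links if l[0] == "A"), key=lambda l: Q[l])
print(best, round(Q[best], 3))
```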