• Title/Summary/Keyword: Q learning


Performance Analysis of Deep Reinforcement Learning for Crop Yield Prediction (작물 생산량 예측을 위한 심층강화학습 성능 분석)

  • Ohnmar Khin;Sung-Keun Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.99-106 / 2023
  • Recently, many studies on crop yield prediction using deep learning technology have been conducted. These algorithms have difficulty constructing a linear map between input data sets and crop prediction results, and their performance depends strongly on the acquired attributes. Deep reinforcement learning can overcome these limitations. This paper analyzes the performance of DQN, Double DQN, and Dueling DQN for improving crop yield prediction. The DQN algorithm suffers from the overestimation problem, whereas Double DQN reduces overestimation and produces better results. The proposed models achieve this by reducing estimation error and increasing prediction accuracy.
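
A minimal sketch (in Python, not the paper's code) contrasting the standard DQN target with the Double DQN target that reduces overestimation; the array inputs and discount factor are illustrative assumptions.

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99):
    # Standard DQN: the target network both selects and evaluates the next
    # action; taking a max over noisy estimates biases the target upward.
    return reward + gamma * np.max(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network selects the action and the target
    # network evaluates it, decoupling selection from evaluation.
    best_action = np.argmax(q_online_next)
    return reward + gamma * q_target_next[best_action]
```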

Reinforcement Learning-Based Adaptive Traffic Signal Control considering Vehicles and Pedestrians in Intersection (차량과 보행자를 고려한 강화학습 기반 적응형 교차로 신호제어 연구)

  • Jong-Min Kim;Sun-Yong Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.143-148 / 2024
  • Traffic congestion causes problems in many forms, both environmental and economic. Recently, intelligent transport systems (ITS) using artificial intelligence (AI) have attracted attention as a way to alleviate the traffic congestion problem. In this paper, we propose a reinforcement learning-based traffic signal control algorithm that can smooth the flow of traffic while reducing the discomfort levels of drivers and pedestrians. Applying the proposed algorithm, we confirmed that the discomfort levels of drivers and pedestrians can be significantly reduced compared to the existing fixed signal control system, and that the performance gap widens as the number of roads at the intersection increases.
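
As a hedged illustration of how such an objective could be encoded, the sketch below shapes a reward from both drivers' and pedestrians' waiting times; the weights and wait-time inputs are assumptions, not values from the paper.

```python
def signal_reward(vehicle_wait_times, pedestrian_wait_times,
                  w_vehicle=1.0, w_pedestrian=1.0):
    """Negative weighted sum of cumulative waiting, so the agent is
    rewarded for reducing discomfort on both sides of the intersection."""
    vehicle_cost = sum(vehicle_wait_times)
    pedestrian_cost = sum(pedestrian_wait_times)
    return -(w_vehicle * vehicle_cost + w_pedestrian * pedestrian_cost)
```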

A Reinforcement Learning Approach to Collaborative Filtering Considering Time-sequence of Ratings (평가의 시간 순서를 고려한 강화 학습 기반 협력적 여과)

  • Lee, Jung-Kyu;Oh, Byong-Hwa;Yang, Ji-Hoon
    • The KIPS Transactions: Part B / v.19B no.1 / pp.31-36 / 2012
  • In recent years, there has been increasing interest in recommender systems, which provide users with personalized suggestions for products or services. In particular, research on collaborative filtering, which analyzes the relations between users and items, has become more active since the Netflix Prize competition. This paper presents a reinforcement learning approach to collaborative filtering. By applying reinforcement learning techniques to movie ratings, we discovered the connection between the time sequence of past ratings and current ratings. To do this, we first formulated the collaborative filtering problem as a Markov Decision Process and then trained a model that reflects this connection using Q-learning. The experimental results indicate that the time sequence of past ratings has a significant effect on current ratings.
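
A minimal tabular Q-learning sketch of this formulation follows; treating the user's k most recent ratings as the state and the next item to recommend as the action is an illustrative assumption, as are the hyperparameters.

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(state, action)], default 0.0
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def q_update(state, action, reward, next_state, candidate_items):
    # state/next_state: tuples of the user's k most recent ratings;
    # action: the item recommended; reward: e.g. the rating received.
    best_next = max(Q[(next_state, a)] for a in candidate_items)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
```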

An Effective Adaptive Dialogue Strategy Using Reinforcement Learning (강화 학습법을 이용한 효과적인 적응형 대화 전략)

  • Kim, Won-Il;Ko, Young-Joong;Seo, Jung-Yun
    • Journal of KIISE: Software and Applications / v.35 no.1 / pp.33-40 / 2008
  • In this paper, we propose a method to enhance the adaptability of a dialogue system using reinforcement learning, which reduces response errors through trial-and-error search similar to the human dialogue process. An adaptive dialogue strategy means that the dialogue system improves user satisfaction and dialogue efficiency by learning users' dialogue styles. To apply reinforcement learning to the dialogue system, we use a main-dialogue span and sub-dialogue spans as the units of learning, and we evaluate system usability using the following features: success or failure, completion time, and error rate in the sub-dialogues, and satisfaction in the main dialogue. In addition, to increase users' convenience in the training steps, we classify users into beginner and expert groups and apply a reinforcement learning policy to each group. In the experiments, we evaluated the performance of the proposed method under both the individual and the group reinforcement learning policies.
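
As a hedged sketch only, the reward below is assembled from the features the abstract lists (success, completion time, and error rate per sub-dialogue; satisfaction for the main dialogue); the weights and scaling are assumptions.

```python
def subdialogue_reward(success, completion_time, error_rate,
                       w_time=0.1, w_error=1.0):
    # Reward a successful sub-dialogue, penalize slow or error-prone ones.
    base = 1.0 if success else -1.0
    return base - w_time * completion_time - w_error * error_rate

def main_dialogue_reward(satisfaction):
    # e.g. a user-reported satisfaction score normalized to [0, 1].
    return satisfaction
```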

Learning-Backoff based Wireless Channel Access for Tactical Airborne Networks (차세대 공중전술네트워크를 위한 Learning-Backoff 기반 무선 채널 접속 방법)

  • Byun, JungHun;Park, Sangjun;Yoon, Joonhyeok;Kim, Yongchul;Lee, Wonwoo;Jo, Ohyun;Joo, Taehwan
    • Journal of Convergence for Information Technology / v.11 no.1 / pp.12-19 / 2021
  • To strengthen national defense, the tactical network is essential: tactics and strategies in wartime situations depend on enormous amounts of information. Therefore, various reconnaissance devices and resources are used to collect this information and transmit it over tactical networks. In tactical networks that use a contention-based channel access scheme, high-speed nodes such as reconnaissance aircraft may suffer performance degradation due to unnecessary channel occupation. In this paper, we propose a Learning-Backoff method that empirically learns the size of the contention window to determine the channel access time. The proposed method shows that network throughput can be increased by up to 25% as the number of high-speed nodes increases.
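
The sketch below illustrates one plausible way to learn a contention-window size from transmission outcomes, in the spirit of the abstract; the candidate window sizes, reward signal, and bandit-style update are assumptions, not the paper's algorithm.

```python
import random

CW_SIZES = [16, 32, 64, 128, 256]      # candidate contention windows
q = {cw: 0.0 for cw in CW_SIZES}       # estimated value per window size
alpha, epsilon = 0.1, 0.1

def choose_cw():
    if random.random() < epsilon:
        return random.choice(CW_SIZES)  # explore
    return max(q, key=q.get)            # exploit the best-known window

def update_cw(cw, success):
    reward = 1.0 if success else -1.0   # e.g. whether an ACK was received
    q[cw] += alpha * (reward - q[cw])
```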

Weight Adjustment Scheme Based on Hop Count in Q-routing for Software Defined Networks-enabled Wireless Sensor Networks

  • Godfrey, Daniel;Jang, Jinsoo;Kim, Ki-Il
    • Journal of Information and Communication Convergence Engineering / v.20 no.1 / pp.22-30 / 2022
  • The reinforcement learning algorithm has proven its potential in solving sequential decision-making problems under uncertainty, such as finding paths to route data packets in wireless sensor networks. With reinforcement learning, computing the optimum path requires careful definition of the so-called reward function, a linear function that aggregates multiple objective functions into a single objective to compute a numerical value (reward) to be maximized. In a typical linear reward function, the multiple objectives to be optimized are integrated as a weighted sum with fixed weighting factors for all learning agents. This study proposes a reinforcement learning-based routing protocol for wireless sensor networks in which different learning agents prioritize different objectives by assigning different weighting factors to the aggregated terms of the reward function. We assign weighting factors to the objectives in the reward function of a sensor node according to its hop-count distance to the sink node. We expect this approach to enhance the effectiveness of multi-objective reinforcement learning for wireless sensor networks with a balanced trade-off among competing parameters. Furthermore, we propose an SDN (Software Defined Networks) architecture with multiple controllers for constant network monitoring, allowing learning agents to adapt to the dynamics of the network conditions. Simulation results show that the proposed scheme enhances the performance of wireless sensor networks under varied conditions, such as node density and traffic intensity, with a good trade-off among competing performance metrics.
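
A hedged sketch of the hop-count-dependent weighting idea follows; the particular objective terms (residual energy, queue delay) and the linear weighting rule are illustrative assumptions rather than the paper's exact reward.

```python
def routing_reward(residual_energy, queue_delay, hop_count, max_hops):
    # Mix two objectives with weights derived from the node's hop-count
    # distance to the sink. The split below is one plausible choice, not
    # the paper's: nodes near the sink weight energy more heavily, while
    # distant nodes weight delay more heavily.
    w_energy = 1.0 - hop_count / max_hops
    w_delay = hop_count / max_hops
    return w_energy * residual_energy - w_delay * queue_delay
```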

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung;Park, Seohee;Lee, Woosik
    • Journal of Internet Computing and Services / v.21 no.5 / pp.1-7 / 2020
  • Deep neural networks (DNNs), which are used as approximation functions in reinforcement learning (RL), can in theory yield realistic results. In empirical benchmark work, temporal-difference learning (TD) shows better results than Monte Carlo learning (MC). However, some previous works show that MC is better than TD when the reward is very rare or delayed, and recent research shows that when the information the agent observes from the environment is partial, as in complex control tasks, MC prediction is superior to TD-based methods. Most of these environments can be regarded as 5-step or 20-step Q-learning, where the experiment continues without long roll-outs to alleviate performance degradation. In other words, in noisy environments, regardless of the controlled roll-outs, it is better to learn with MC, which is robust to noisy rewards, than with TD, or with something almost identical to MC. These studies break with the view that TD is better than MC, and these recent results suggest that combining MC and TD can be better than either alone. Therefore, in this study, building on the results of previous studies, we exploit a random balance between TD and MC in RL, without the complicated reward formulations those studies use. Comparing a DQN using a random mixture of MC and TD against the well-known DQN using only TD-based learning, we demonstrate through experiments in OpenAI Gym that the mixture of TD and MC is beneficial even where TD learning alone performs well.
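
The sketch below shows one simple way to realize such a random balance when forming the learning target; the per-update uniform mixing coefficient and the trajectory format are assumptions, not the paper's exact scheme.

```python
import random

def mixed_target(rewards, bootstrap_value, gamma=0.99):
    # rewards: rewards observed from time t until the end of the episode;
    # bootstrap_value: max_a Q(s_{t+1}, a) for the one-step TD target.
    mc_return = sum((gamma ** i) * r for i, r in enumerate(rewards))
    td_target = rewards[0] + gamma * bootstrap_value
    beta = random.random()              # fresh random balance each update
    return beta * mc_return + (1 - beta) * td_target
```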

Labeling Q-learning with SOM

  • Lee, Haeyeon;Kenichi Abe;Hiroyuki Kamaya
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집) / 2002.10a / pp.35.3-35 / 2002
  • Reinforcement Learning (RL) is one of the machine learning methods, in which an RL agent autonomously learns an action-selection policy through interactions with its environment. At the beginning of RL research, it was limited to problems in environments assumed to be Markov Decision Processes (MDPs). In practical problems, however, the agent suffers from incomplete perception: the agent observes the state of the environment, but these observations include incomplete information about the state. This problem is formally modeled as a Partially Observable MDP (POMDP). One of the possible approaches to POMDPs is to use historical information to estimate states. The problem of these approaches is how t..
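
As a hedged aside on the history-based approach mentioned above, the sketch below keys a Q-table on a short window of recent observations instead of the hidden state; the window length and table layout are assumptions.

```python
from collections import defaultdict, deque

Q = defaultdict(float)          # Q[(history_key, action)], default 0.0
history = deque(maxlen=4)       # last 4 observations approximate the state

def history_state(obs):
    """Append the latest observation and return a hashable state key."""
    history.append(obs)
    return tuple(history)

# A standard Q-learning update can then use Q[(history_state(obs), action)]
# wherever Q[(state, action)] would appear in the fully observable case.
```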


Information Theoretic Learning with Maximizing Tsallis Entropy

  • Aruga, Nobuhide;Tanaka, Masaru
    • Proceedings of the IEEK Conference / 2002.07b / pp.810-813 / 2002
  • We present information theoretic learning based on the Tsallis entropy maximization principle for various values of $q$. The Tsallis entropy is one of the generalized entropies and is a canonical entropy in the sense of physics. Further, we consider the dependency of the learning on the parameter $\sigma$, the standard deviation of an assumed a priori distribution of samples, such as a Parzen window.
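
For reference, the Tsallis entropy of a discrete distribution $\{p_i\}$ with entropic index $q$ (the standard definition, not taken from the paper) is

```latex
S_q = \frac{k}{q-1}\Bigl(1 - \sum_i p_i^{\,q}\Bigr),
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i ,
```

so the Shannon entropy is recovered in the limit $q \to 1$.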


Optimal Route Finding Algorithms Based on Reinforcement Learning (강화학습을 이용한 주행경로 최적화 알고리즘 개발)

  • 정희석;이종수
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.157-161 / 2003
  • In this paper, we apply the concept of reinforcement learning to the optimization of vehicle driving routes. A feature of reinforcement learning is that it can learn optimized behavior without explicit information about the rules governing the system of interest, which makes it well suited to systems such as real driving routes, where complex factors like traffic information and changes over time must be considered. In addition, by adjusting the degree and criteria of the reinforcement (rewards and penalties) used for learning, a variety of optimal driving routes can be provided. Accordingly, this paper implements a system that provides various optimal driving routes using a reinforcement learning algorithm.
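
A minimal sketch (not the paper's implementation) of Q-learning for route finding on a small road graph follows; the graph, travel-time costs, goal bonus, and hyperparameters are all illustrative assumptions.

```python
from collections import defaultdict
import random

# Edge weights represent travel times; shorter total time is better.
graph = {'A': {'B': 2, 'C': 5}, 'B': {'C': 1, 'D': 4},
         'C': {'D': 1}, 'D': {}}
Q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.95, 0.2

def run_episode(start='A', goal='D'):
    node = start
    while node != goal:
        neighbors = list(graph[node])
        if random.random() < epsilon:
            nxt = random.choice(neighbors)                    # explore
        else:
            nxt = max(neighbors, key=lambda n: Q[(node, n)])  # exploit
        # Negative travel time as reward, plus a bonus for reaching the goal.
        reward = -graph[node][nxt] + (10 if nxt == goal else 0)
        best_next = max((Q[(nxt, n)] for n in graph[nxt]), default=0.0)
        Q[(node, nxt)] += alpha * (reward + gamma * best_next - Q[(node, nxt)])
        node = nxt
```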
