• Title/Summary/Keyword: DQN

Search Results: 64

Performance Analysis of Deep Reinforcement Learning for Crop Yield Prediction (작물 생산량 예측을 위한 심층강화학습 성능 분석)

  • Ohnmar Khin;Sung-Keun Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.1
    • /
    • pp.99-106
    • /
    • 2023
  • Recently, many studies on crop yield prediction using deep learning have been conducted. These algorithms have difficulty constructing a linear map between input data sets and crop prediction results, and their implementation strongly depends on the rate of acquired attributes. Deep reinforcement learning can overcome these limitations. This paper analyzes the performance of DQN, Double DQN, and Dueling DQN for improving crop yield prediction. The DQN algorithm suffers from the overestimation problem, whereas Double DQN reduces overestimation and thus yields better results. The proposed models achieve this by reducing false predictions and increasing prediction accuracy.
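
The overestimation issue mentioned above stems from the max operator in the standard DQN target; Double DQN decouples action selection (online network) from action evaluation (target network). A minimal PyTorch sketch of the two targets follows; the toy batch values and network outputs are illustrative assumptions, not the paper's data:

```python
import torch

def dqn_target(reward, next_q_target, gamma=0.99):
    # Standard DQN: one network both selects and evaluates the next
    # action; maximizing over noisy estimates biases the target upward.
    return reward + gamma * next_q_target.max(dim=1).values

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99):
    # Double DQN: the online network selects the action, the target
    # network evaluates it, which damps the overestimation bias.
    best_action = next_q_online.argmax(dim=1, keepdim=True)
    return reward + gamma * next_q_target.gather(1, best_action).squeeze(1)

# Toy batch of 2 transitions with 3 actions (illustrative values only).
reward = torch.tensor([1.0, 0.5])
q_online = torch.tensor([[0.2, 0.9, 0.1], [0.4, 0.3, 0.8]])
q_target = torch.tensor([[0.8, 0.3, 0.5], [0.7, 0.2, 0.4]])
print(dqn_target(reward, q_target))                   # max -> higher target
print(double_dqn_target(reward, q_online, q_target))  # decoupled, lower
```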

Development of Semi-Active Control Algorithm Using Deep Q-Network (Deep Q-Network를 이용한 준능동 제어알고리즘 개발)

  • Kim, Hyun-Su;Kang, Joo-Won
    • Journal of Korean Association for Spatial Structures
    • /
    • v.21 no.1
    • /
    • pp.79-86
    • /
    • 2021
  • The control performance of a smart tuned mass damper (TMD) mainly depends on its control algorithm, and many control strategies have been proposed for semi-active control devices. Recently, machine learning has begun to be applied to the development of vibration control algorithms. In this study, reinforcement learning, one of the machine learning techniques, was employed to develop a semi-active control algorithm for a smart TMD composed of a magnetorheological (MR) damper. For this purpose, an 11-story building structure with a smart TMD was selected to construct the reinforcement learning environment, and a time history analysis of the example structure subjected to earthquake excitation was conducted during the reinforcement learning procedure. Deep Q-network (DQN), one of various reinforcement learning algorithms, was used to build the learning agent; the command voltage sent to the MR damper is determined by the action produced by the DQN. Parametric studies on the hyper-parameters of the DQN were performed by numerical simulation. After an appropriate number of training iterations with proper hyper-parameters, a DQN model was developed that can effectively control the smart TMD to reduce the seismic responses of the example structure.
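
To make the agent-structure interaction concrete, the following hypothetical PyTorch sketch frames the command voltage as a discrete DQN action; the voltage levels, state features, and network shape are assumptions, not the paper's design:

```python
import random
import torch
import torch.nn as nn

# Discrete command-voltage levels for the MR damper as the action space
# (the levels and state features are assumed for illustration).
VOLTAGE_LEVELS = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # volts
STATE_DIM = 4  # e.g., roof and TMD displacement/velocity

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, len(VOLTAGE_LEVELS)),
)

def command_voltage(state, epsilon=0.1):
    # Epsilon-greedy policy: explore occasionally, otherwise send the
    # voltage whose Q-value the network predicts to be highest.
    if random.random() < epsilon:
        return random.choice(VOLTAGE_LEVELS)
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
    return VOLTAGE_LEVELS[int(q_values.argmax())]

print(command_voltage([0.01, -0.02, 0.03, 0.0]))
```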

Path Planning of Unmanned Aerial Vehicle based Reinforcement Learning using Deep Q Network under Simulated Environment (시뮬레이션 환경에서의 DQN을 이용한 강화 학습 기반의 무인항공기 경로 계획)

  • Lee, Keun Hyoung;Kim, Shin Dug
    • Journal of the Semiconductor & Display Technology
    • /
    • v.16 no.3
    • /
    • pp.127-130
    • /
    • 2017
  • In this research, we present a path planning method for the autonomous flight of unmanned aerial vehicles (UAVs) through reinforcement learning in a simulated environment. We design a simulator for reinforcement learning of a UAV and implement an interface for compatibility between the Deep Q-Network (DQN) and the simulator. We then perform reinforcement learning through the simulator and the DQN, using the Q-learning algorithm, a type of reinforcement learning. Through experiments, we verify the performance of the DQN-simulator combination. Finally, we evaluate the learning results and suggest a path planning strategy using reinforcement learning.

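The abstract describes a simulator exposed to the DQN through a compatibility interface. The sketch below shows one plausible minimal form of such an interface, in the spirit of a gym-style reset/step loop; the grid size, action set, and reward values are assumptions:

```python
import numpy as np

class UAVGridSim:
    """Hypothetical minimal simulator exposing the reset/step interface a
    DQN agent consumes; grid size, actions, and rewards are assumptions."""

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # N, S, E, W moves

    def __init__(self, size=10, goal=(9, 9)):
        self.size, self.goal = size, goal
        self.pos = (0, 0)

    def reset(self):
        self.pos = (0, 0)
        return np.array(self.pos, dtype=np.float32)

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        reward = 10.0 if done else -0.1  # step penalty favors short paths
        return np.array(self.pos, dtype=np.float32), reward, done

env = UAVGridSim()
state = env.reset()
state, reward, done = env.step(2)  # the DQN would choose this action index
```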

A Distributed Scheduling Algorithm based on Deep Reinforcement Learning for Device-to-Device communication networks (단말간 직접 통신 네트워크를 위한 심층 강화학습 기반 분산적 스케쥴링 알고리즘)

  • Jeong, Moo-Woong;Kim, Lyun Woo;Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.11
    • /
    • pp.1500-1506
    • /
    • 2020
  • In this paper, we study a scheduling problem based on reinforcement learning for overlay device-to-device (D2D) communication networks. Although various technologies for D2D communication networks using Q-learning, one of the reinforcement learning models, have been studied, Q-learning incurs tremendous complexity as the number of states and actions increases. To solve this problem, D2D communication technologies based on the Deep Q Network (DQN) have been studied. In this paper, we design a DQN model that considers the characteristics of wireless communication systems and propose a distributed scheduling scheme based on this model that can reduce feedback and signaling overhead. The proposed model trains all parameters in a centralized manner and transfers the final trained parameters to all mobiles, which then individually determine their actions using the transferred parameters. We analyze the performance of the proposed scheme by computer simulation and compare it with the optimal, opportunistic selection, and full transmission schemes.
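
The train-centrally/act-locally split described above can be sketched as follows; the network shape, state contents, and action set (e.g., transmit vs. stay silent) are assumptions, not the paper's exact design:

```python
import copy
import torch
import torch.nn as nn

def make_q_net(state_dim=3, n_actions=2):  # e.g., transmit vs. stay silent
    return nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                         nn.Linear(32, n_actions))

central_net = make_q_net()
# ... centralized DQN training would happen here ...

# Broadcast the final trained parameters once; afterwards no per-slot
# feedback or signaling toward the central node is required.
broadcast = copy.deepcopy(central_net.state_dict())

class Mobile:
    def __init__(self, params):
        self.q_net = make_q_net()
        self.q_net.load_state_dict(params)  # received trained parameters

    def decide(self, local_state):
        # Each device chooses its action from purely local observations.
        with torch.no_grad():
            q = self.q_net(torch.as_tensor(local_state, dtype=torch.float32))
        return int(q.argmax())

mobiles = [Mobile(broadcast) for _ in range(4)]
actions = [m.decide([0.8, 0.1, 0.3]) for m in mobiles]
```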

The Effect of Segment Size on Quality Selection in DQN-based Video Streaming Services (DQN 기반 비디오 스트리밍 서비스에서 세그먼트 크기가 품질 선택에 미치는 영향)

  • Kim, ISeul;Lim, Kyungshik
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.10
    • /
    • pp.1182-1194
    • /
    • 2018
  • Dynamic Adaptive Streaming over HTTP (DASH) is envisioned to evolve to meet the increasing demand for seamless video streaming services in the near future. DASH performance heavily depends on the client's adaptive quality selection algorithm, which is not included in the standard. Existing algorithms are basically procedural and cannot easily capture and reflect all variations of dynamic network and traffic conditions across a variety of network environments. To solve this problem, this paper proposes a novel quality selection mechanism based on the Deep Q-Network (DQN) model: the DQN-based DASH Adaptive Bitrate (ABR) mechanism. The proposed mechanism adopts a new reward calculation method based on five major performance metrics to reflect the current conditions of networks and devices in real time. In addition, the size of the consecutive video segments to be downloaded is considered as a major learning metric to reflect a variety of video encodings. Experimental results show that the proposed mechanism quickly selects a suitable video quality even in high error-rate environments, significantly reducing the frequency of quality changes compared to the existing algorithm while improving average video quality during playback.
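
The paper's exact five metrics are not listed in the abstract; the following hypothetical reward function illustrates the general shape of such a multi-metric design, with segment size included as a term in line with the paper's focus. All metric choices and weights are assumptions:

```python
def abr_reward(bitrate_mbps, rebuffer_s, quality_change, buffer_s,
               segment_bytes, w=(1.0, 4.0, 1.0, 0.1, 1e-7)):
    """Hypothetical five-term reward; the paper's actual metrics and
    weights may differ."""
    w_rate, w_rebuf, w_switch, w_buf, w_size = w
    return (w_rate * bitrate_mbps              # reward higher quality
            - w_rebuf * rebuffer_s             # penalize playback stalls
            - w_switch * abs(quality_change)   # penalize oscillation
            + w_buf * buffer_s                 # reward buffer headroom
            - w_size * segment_bytes)          # account for segment size

print(abr_reward(bitrate_mbps=3.0, rebuffer_s=0.5, quality_change=1,
                 buffer_s=8.0, segment_bytes=2_000_000))
```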

Study on the Development of an Expressway Hard Shoulder Running Algorithm Using Reinforcement Learning (강화학습 기반 고속도로 갓길차로제 운영 알고리즘 개발 연구)

  • Harim Jeong;Sangmin Park;Sungkwan Kang;Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.4
    • /
    • pp.63-77
    • /
    • 2023
  • This study applies reinforcement learning to the effective operation of expressway hard shoulder running (HSR). An HSR algorithm was developed, and its effectiveness was evaluated with the VISSIM microscopic simulation program in two respects: mobility and safety. The DQN-based HSR algorithm achieved speed improvements of up to 26 km/h, while the difference in the number of conflicts compared to the current method was not significant. Considering these results, DQN-based HSR operation has a clear effect, and adjusting the current operational criteria should be considered.
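
One plausible framing of DQN-based HSR operation is a per-interval open/close decision made from observed traffic conditions, sketched below; the state features, normalization constants, and action set are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

ACTIONS = ["keep_closed", "open_shoulder"]

q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                      nn.Linear(32, len(ACTIONS)))

def hsr_decision(speed_kmh, volume_vph, occupancy):
    # Normalize observed traffic conditions into a small state vector
    # (the features and scales here are assumed for illustration).
    state = torch.tensor([speed_kmh / 100.0, volume_vph / 2000.0, occupancy])
    with torch.no_grad():
        return ACTIONS[int(q_net(state).argmax())]

print(hsr_decision(speed_kmh=45.0, volume_vph=1800.0, occupancy=0.35))
```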

Visual Analysis of Deep Q-network

  • Seng, Dewen;Zhang, Jiaming;Shi, Xiaoying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.3
    • /
    • pp.853-873
    • /
    • 2021
  • In recent years, deep reinforcement learning (DRL) models have enjoyed great interest owing to their success in a variety of challenging tasks. The Deep Q-Network (DQN) is a widely used deep reinforcement learning model that trains an intelligent agent to execute optimal actions while interacting with an environment, and it is well known for its ability to surpass skilled human players across many Atari 2600 games. Although the DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding the deep Q-network in a non-blind manner. Based on data stored during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of the DQN from different perspectives. We report the system's performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by the DQN, and the rationality of the decisions made by the agent.
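
A system like this needs per-step states and Q-values stored during training and testing. The sketch below shows one minimal way to log them for later coordinated views; the log format and the gym-style env interface are assumptions, not the paper's storage design:

```python
import torch

def record_episode(env, q_net, max_steps=1000):
    """Log each state with its Q-values and chosen action so a visual
    analytics front end can relate them afterwards (format assumed;
    env is any gym-style environment with reset()/step())."""
    log, state, done, t = [], env.reset(), False, 0
    while not done and t < max_steps:
        with torch.no_grad():
            q = q_net(torch.as_tensor(state, dtype=torch.float32))
        action = int(q.argmax())
        log.append({"step": t, "state": list(state),
                    "q_values": q.tolist(), "action": action})
        state, reward, done = env.step(action)
        t += 1
    return log
```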

Dueling DQN-based Routing for Dynamic LEO Satellite Networks (동적 저궤도 위성 네트워크를 위한 Dueling DQN 기반 라우팅 기법)

  • Dohyung Kim;Sanghyeon Lee;Heoncheol Lee;Dongshik Won
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.4
    • /
    • pp.173-183
    • /
    • 2023
  • This paper deals with a routing algorithm that can find the best communication route to a desired point in low earth orbit (LEO) satellite networks while taking disconnected links into account. When LEO satellite networks are dynamic, the number and distribution of disconnected links vary, which makes the routing problem challenging. To solve this problem, we propose a routing method based on Dueling DQN, one of the reinforcement learning algorithms. The proposed method was verified and showed improved performance, reducing convergence times and converging more stably than other existing reinforcement learning-based routing algorithms.
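
The dueling architecture named in the title separates state value from per-action advantage. A generic PyTorch sketch follows (not the paper's exact network); the state and action dimensions are placeholders, e.g., link-status features and candidate next hops:

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Generic dueling head: value and advantage streams recombined as
    Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state):
        h = self.body(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# E.g., state = link-status features, actions = next-hop satellite choices.
net = DuelingQNet(state_dim=8, n_actions=4)
print(net(torch.randn(1, 8)))
```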

Mapless Navigation Based on DQN Considering Moving Obstacles, and Training Time Reduction Algorithm (이동 장애물을 고려한 DQN 기반의 Mapless Navigation 및 학습 시간 단축 알고리즘)

  • Yoon, Beomjin;Yoo, Seungryeol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.3
    • /
    • pp.377-383
    • /
    • 2021
  • Recently, in line with the 4th industrial revolution, the use of autonomous mobile robots for flexible logistics transfer is increasing in factories, warehouses, and service areas. In large factories, much manual work is required to use Simultaneous Localization and Mapping (SLAM), so the need for improved autonomous driving of mobile robots is emerging. Accordingly, this paper proposes an algorithm for mapless navigation that travels along an optimal path while avoiding fixed or moving obstacles. For mapless navigation, the robot is trained to avoid fixed or moving obstacles through a Deep Q-Network (DQN), and accuracies of 90% and 93% are obtained for the two types of obstacle avoidance, respectively. In addition, a DQN requires a long learning time to meet the required performance before deployment. To shorten it, a target size change algorithm is proposed, and the reduced learning time and obstacle avoidance performance are confirmed through simulation.
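
The abstract does not detail the target size change algorithm; one plausible reading is a curriculum-style schedule that starts with an enlarged goal region so early episodes succeed often, then shrinks it to the true size. A hypothetical sketch, with all schedule values assumed:

```python
def target_radius(episode, start=1.0, final=0.2, decay_episodes=500):
    """Shrink the goal region linearly from an easy starting size to the
    true size as training progresses (all schedule values assumed)."""
    frac = min(episode / decay_episodes, 1.0)
    return start + (final - start) * frac

for ep in (0, 250, 500, 800):
    print(ep, round(target_radius(ep), 3))
```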

XH-DQN: Fact verification using a combined model of graph transformer and DQN (XH-DQN: 사실 검증을 위한 그래프 Transformer와 DQN 결합 모델)

  • Seo, Mintaek;Na, Seung-Hoon;Shin, Dongwook;Kim, Seon-Hoon;Kang, Inho
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.227-232
    • /
    • 2021
  • The fact verification problem consists of three stages: document retrieval, evidence selection, and claim verification. While many models have been proposed for the claim verification stage, the main focus of fact verification models, a model that concentrates on the evidence selection stage and solves it through reinforcement learning has also been proposed. We introduce a graph-based model and a reinforcement learning-based fact verification model and apply each to Korean fact verification. In addition, we combine the two models in parallel so that the combined model retains the strengths of each, and we compare its performance, as well as performance according to the granularity of the evidence.

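Evidence selection via reinforcement learning, as introduced above, can be viewed as a sequential decision problem: at each step the agent scores candidate sentences against the claim and either picks one or stops. The sketch below is a hypothetical illustration, not XH-DQN's actual architecture; the encoder dimension, scorer, and stop threshold are all assumptions:

```python
import torch
import torch.nn as nn

ENC_DIM = 128  # assumed sentence/claim embedding size

# Q-style scorer over (claim, candidate) pairs; a stand-in for the DQN.
scorer = nn.Sequential(nn.Linear(2 * ENC_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))

def select_evidence(claim_vec, candidate_vecs, max_picks=5):
    picked = []
    for _ in range(max_picks):
        with torch.no_grad():
            scores = [scorer(torch.cat([claim_vec, c]).unsqueeze(0)).item()
                      if i not in picked else float("-inf")
                      for i, c in enumerate(candidate_vecs)]
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] < 0.0:   # assumed "stop" threshold
            break
        picked.append(best)      # greedily take the highest-scoring sentence
    return picked

claim = torch.randn(ENC_DIM)
candidates = [torch.randn(ENC_DIM) for _ in range(8)]
print(select_evidence(claim, candidates))
```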