• Title/Summary/Keyword: deep Q learning

85 search results

Improved Deep Q-Network Algorithm Using Self-Imitation Learning

  • 선우영민;이원창
    • 전기전자학회논문지 / Vol. 25, No. 4 / pp.644-649 / 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that lets an agent exploit good past experiences to find an optimal policy. Combined with reinforcement learning algorithms that have an actor-critic structure, it has shown substantial improvements across a variety of environments. However, even though Self-Imitation Learning is highly beneficial to reinforcement learning, its applicability has so far been limited to algorithms with an actor-critic architecture. This paper proposes a method for applying Self-Imitation Learning to DQN, a value-based reinforcement learning algorithm, and trains the resulting SIL-augmented DQN in various environments. By comparing the results with those of the original DQN, we show that Self-Imitation Learning can indeed be applied to DQN and can improve its performance.
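
Since the abstract leaves the loss formulation to the paper itself, here is a minimal PyTorch sketch of one plausible way to graft the self-imitation idea onto DQN: keep the usual one-step TD loss and add a term that pushes Q(s, a) upward only where the stored discounted return beat the current estimate. The `sil_weight` coefficient and the batch layout are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

# Hedged sketch: adding a self-imitation term to a standard DQN loss. We
# assume the SIL idea of "learn only from past experience that turned out
# better than expected", i.e. penalize Q(s, a) only when the stored
# discounted return `ret` exceeds it. Not the paper's exact formulation.

def dqn_sil_loss(q_net, target_net, batch, gamma=0.99, sil_weight=0.1):
    s, a, r, s_next, done, ret = batch  # `ret` = stored discounted return

    # Standard one-step TD loss (Huber), as in vanilla DQN.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max(1).values
    td_loss = nn.functional.smooth_l1_loss(q_sa, target)

    # Self-imitation term: active only where the realized return beat Q(s, a).
    sil_loss = torch.clamp(ret - q_sa, min=0).pow(2).mean()

    return td_loss + sil_weight * sil_loss
```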

Enhanced Machine Learning Algorithms: Deep Learning, Reinforcement Learning, and Q-Learning

  • Park, Ji Su;Park, Jong Hyuk
    • Journal of Information Processing Systems / Vol. 16, No. 5 / pp.1001-1007 / 2020
  • In recent years, machine learning algorithms have been used and extended in a growing range of fields, such as facial recognition, signal processing, personal authentication, and stock prediction. In particular, algorithms such as deep learning, reinforcement learning, and Q-learning are continuously being improved, with deep learning expanding especially rapidly. Nevertheless, machine learning has not yet been applied in several areas, such as personal authentication technology, an essential tool in the digital information era, gait recognition as a promising biometric, and techniques for solving state-space problems. This paper therefore surveys how deep learning, reinforcement learning, and Q-learning, as representative machine learning algorithms, are being improved and extended across fields such as agricultural technology, personal authentication, wireless networks, games, biometric recognition, and image recognition.

Applying CEE (Cross Entropy Error) to Improve the Performance of the Q-Learning Algorithm

  • 강현구;서동성;이병석;강민수
    • 한국인공지능학회지 / Vol. 5, No. 1 / pp.1-9 / 2017
  • Recently, the Q-Learning algorithm, a kind of reinforcement learning, has mainly been used to implement artificial intelligence systems in combination with deep learning, and much research aims to improve its performance. The purpose of this paper is to improve the performance of the Q-Learning algorithm by applying Cross Entropy Error to its loss function. Because the mean squared error used in Q-Learning makes it difficult to measure the exact error rate, the Cross Entropy Error, known to be highly accurate, is applied instead. Experimental results show a success rate of about 12% with the Mean Squared Error used in existing reinforcement learning, versus about 36% with the Cross Entropy Error used in deep learning.
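
The abstract does not spell out how a cross-entropy error is defined over Q-values, so the sketch below shows one common reading, offered as an assumption rather than the paper's formulation: normalize the Q-values with a softmax and compute the cross-entropy against a one-hot greedy target action, alongside the MSE baseline for comparison.

```python
import numpy as np

# Illustrative sketch only: one way to substitute a cross-entropy error (CEE)
# for the usual mean-squared error in a Q-learning update, by treating
# softmax-normalized Q-values as a distribution over actions and the greedy
# target action as the label. Not the paper's exact loss.

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def mse_loss(q_pred, q_target):
    return np.mean((q_pred - q_target) ** 2)

def cee_loss(q_values, target_action):
    # Cross-entropy of the softmax policy against a one-hot greedy target.
    probs = softmax(q_values)
    return -np.log(probs[target_action] + 1e-12)

q = np.array([0.2, 1.5, -0.3])
print("MSE :", mse_loss(q, np.array([0.0, 2.0, 0.0])))
print("CEE :", cee_loss(q, target_action=1))
```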

Visual Analysis of Deep Q-network

  • Seng, Dewen;Zhang, Jiaming;Shi, Xiaoying
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 3 / pp.853-873 / 2021
  • In recent years, deep reinforcement learning (DRL) models have attracted great interest owing to their success in a variety of challenging tasks. Deep Q-Network (DQN) is a widely used deep reinforcement learning model that trains an intelligent agent to execute optimal actions while interacting with an environment. The model is well known for its ability to surpass skilled human players across many Atari 2600 games. Although DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding the deep Q-network in a non-blind manner. Based on the stored data generated during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of DQN from different perspectives. We report the system performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by DQN, and the rationality of the decisions made by the agent.
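
The paper's coordinated views are driven by data stored during training and testing; the snippet below is a minimal sketch (not the authors' system) of the kind of per-step trace logging such a tool needs, with an assumed JSON-lines format and field names.

```python
import json
import numpy as np

# Hedged sketch of instrumentation for DQN visual analytics: log each state,
# its Q-values, the chosen action, and the reward so that views can replay
# the agent's behavior offline. The schema here is an assumption.

class DQNTraceLogger:
    def __init__(self, path="dqn_trace.jsonl"):
        self.f = open(path, "w")

    def log_step(self, episode, step, state, q_values, action, reward):
        record = {
            "episode": episode,
            "step": step,
            "state": np.asarray(state).tolist(),
            "q_values": np.asarray(q_values).tolist(),
            "action": int(action),
            "reward": float(reward),
        }
        self.f.write(json.dumps(record) + "\n")

    def close(self):
        self.f.close()

logger = DQNTraceLogger("demo_trace.jsonl")
logger.log_step(0, 0, state=[0.1, 0.2], q_values=[0.5, -0.1], action=0, reward=1.0)
logger.close()
```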

Applying Deep Reinforcement Learning to Improve Throughput and Reduce Collision Rate in IEEE 802.11 Networks

  • Ke, Chih-Heng;Astuti, Lia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 1 / pp.334-349 / 2022
  • The effectiveness of Wi-Fi networks is greatly influenced by the optimization of contention window (CW) parameters. Unfortunately, the conventional approach employed by IEEE 802.11 wireless networks is not scalable enough to sustain consistent performance as the number of stations increases, yet it remains the default channel-access method for single-user 802.11 transmissions. Recently, there has been a spike in attempts to enhance network performance using a machine learning (ML) technique known as reinforcement learning (RL), whose advantage is interacting with the surrounding environment and making decisions based on its own experience. Deep RL (DRL) uses deep neural networks (DNN) to deal with more complex environments (such as continuous state spaces or action spaces) and to obtain optimal rewards. Accordingly, we present a new CW control mechanism, termed the contention window threshold (CWThreshold), which uses the DRL principle to define the threshold value and learn optimal settings under various network scenarios. We demonstrate our proposed method, a smart exponential-threshold-linear backoff algorithm with a deep Q-learning network (SETL-DQN). The simulation results show that our proposed SETL-DQN algorithm can effectively improve throughput and reduce collision rates.
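
As a rough illustration of the exponential-threshold-linear backoff that SETL-DQN builds on, the sketch below doubles the contention window on collision until an assumed threshold and grows it linearly afterwards; in the paper the DQN agent learns the threshold value, whereas here it is fixed for clarity.

```python
import random

# Hedged sketch of an exponential-threshold-linear (ETL) backoff: below a
# threshold the contention window doubles on collision (exponential region),
# above it the window grows additively (linear region). CW bounds and the
# threshold value are illustrative assumptions.

CW_MIN, CW_MAX = 16, 1024

def next_cw(cw, collided, cw_threshold=256):
    if not collided:
        return CW_MIN                      # reset after a successful send
    if cw < cw_threshold:
        return min(cw * 2, CW_MAX)         # exponential region
    return min(cw + CW_MIN, CW_MAX)        # linear region past the threshold

cw = CW_MIN
for _ in range(8):                         # simulate consecutive collisions
    cw = next_cw(cw, collided=True)
    print("backoff drawn from [0,", cw, ") ->", random.randrange(cw))
```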

A fast and simplified crack width quantification method via deep Q learning

  • Xiong Peng;Kun Zhou;Bingxu Duan;Xingu Zhong;Chao Zhao;Tianyu Zhang
    • Smart Structures and Systems / Vol. 32, No. 4 / pp.219-233 / 2023
  • Crack width is an important indicator for evaluating the health condition of a concrete structure. It is commonly measured manually with a crack width gauge, which is time-consuming and laborious. In this paper, we propose a fast and simplified crack width quantification method via deep Q-learning and geometric calculation. First, the crack edge is extracted using a U-Net network and an edge detection operator. Then, an intelligent decision is made by the deep Q-learning model. Further, a geometric calculation method based on endpoint and curvature extreme point detection is proposed. Finally, a case study is carried out to demonstrate the effectiveness of the proposed method, which achieves high precision in real crack width quantification.
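
The geometric step can be illustrated independently of the learning components. Below is a hedged sketch that assumes U-Net plus edge detection has already produced two opposing crack-edge point sets, and that estimates width as the nearest-neighbor distance between them; the paper's endpoint/curvature-extreme construction is more refined than this.

```python
import numpy as np

# Illustrative sketch of the geometric stage only: width is taken as the
# distance from each point on one crack edge to the nearest point on the
# opposing edge. The toy coordinates below are placeholders.

def crack_widths(edge_a, edge_b):
    edge_a, edge_b = np.asarray(edge_a), np.asarray(edge_b)
    # For every point on edge A, distance to the closest point on edge B.
    d = np.linalg.norm(edge_a[:, None, :] - edge_b[None, :, :], axis=2)
    return d.min(axis=1)

a = [(0, 0.0), (1, 0.1), (2, 0.0)]
b = [(0, 1.2), (1, 1.0), (2, 1.1)]
print("per-point width:", crack_widths(a, b))
print("mean width     :", crack_widths(a, b).mean())
```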

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks

  • 송태원
    • 사물인터넷융복합논문지 / Vol. 8, No. 4 / pp.1-7 / 2022
  • In IoT networks, power management of the cluster head, which acts as a gateway between the cluster and the sink node, becomes increasingly important as the number of IoT devices grows. In particular, when the cluster head is a mobile wireless device, its power consumption must be minimized for the sake of the network's lifetime. Transmission delay is also one of the key metrics for fast information gathering in IoT networks. This paper proposes a low-power buffer management scheme that takes transmission delay into account. In the proposed scheme, deep Q-learning, a deep reinforcement learning method, is used to decide whether to forward or discard each received packet, reducing transmission delay while saving power. Compared with an existing buffer management scheme, the proposed algorithm is shown to improve both power consumption and delay under the Slotted ALOHA protocol.
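
To make the forward-or-drop trade-off concrete, here is a toy reward shaping in the spirit of the abstract: forwarding spends transmit power but drains the queue, while dropping saves power at a delay cost. The weights and the queue-length state feature are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of the per-packet decision the paper's agent faces: forward
# (costs transmit power, shrinks the queue) or drop (saves power, leaves
# queued packets waiting). All constants are assumptions for illustration.

FORWARD, DROP = 0, 1

def reward(action, queue_len, tx_power_cost=1.0, delay_weight=0.5):
    if action == FORWARD:
        # Spend energy now, but the queue (and hence delay) shrinks.
        return -tx_power_cost - delay_weight * max(queue_len - 1, 0)
    # Dropping saves power but leaves queued packets waiting.
    return -delay_weight * queue_len

print(reward(FORWARD, queue_len=5))
print(reward(DROP, queue_len=5))
```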

Development of Semi-Active Control Algorithm Using Deep Q-Network

  • 김현수;강주원
    • 한국공간구조학회논문집 / Vol. 21, No. 1 / pp.79-86 / 2021
  • The control performance of a smart tuned mass damper (TMD) mainly depends on its control algorithm, and many control strategies have been proposed for semi-active control devices. Recently, machine learning has begun to be applied to the development of vibration control algorithms. In this study, reinforcement learning was employed to develop a semi-active control algorithm for a smart TMD composed of a magnetorheological (MR) damper. For this purpose, an 11-story building structure with a smart TMD was selected to construct the reinforcement learning environment, and a time history analysis of the example structure subjected to earthquake excitation was conducted during the learning procedure. Among various reinforcement learning algorithms, a deep Q-network (DQN) was used as the learning agent; the command voltage sent to the MR damper is determined by the action produced by the DQN. Parametric studies on the hyper-parameters of the DQN were performed by numerical simulation. After adequate training with proper hyper-parameters, a DQN model was developed that can effectively control the smart TMD to reduce the seismic responses of the example structure.
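
A minimal sketch of the agent's interface, under the assumption (not stated in the abstract) of six discrete command-voltage levels: the DQN maps a structural-response state vector to Q-values, and the argmax action selects the voltage sent to the MR damper.

```python
import torch
import torch.nn as nn

# Hedged sketch, not the paper's model: a DQN whose discrete actions are
# candidate MR-damper command voltages. The state dimension, layer sizes,
# and voltage levels below are illustrative assumptions.

VOLTAGE_LEVELS = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # assumed command levels

class DamperDQN(nn.Module):
    def __init__(self, state_dim=4, n_actions=len(VOLTAGE_LEVELS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

    def command_voltage(self, state):
        q = self.forward(state)
        return VOLTAGE_LEVELS[int(q.argmax(dim=-1))]

state = torch.tensor([0.01, -0.02, 0.3, -0.1])  # toy response vector
print(DamperDQN().command_voltage(state))
```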

Lane Change Methodology for Autonomous Vehicles Based on Deep Reinforcement Learning

  • 박다윤;배상훈;박부기;정보경
    • 한국ITS학회 논문지 / Vol. 22, No. 1 / pp.276-290 / 2023
  • Korea is currently making various efforts toward the commercialization of autonomous vehicles, and studies on how such vehicles can drive safely and quickly within operating guidelines are gaining attention. This study views route finding for autonomous vehicles from a microscopic perspective and trains lane changing with Deep Q-Learning to demonstrate its effectiveness. SUMO was used for simulation; in the scenario, a vehicle starts in a random lane at the origin and must reach the third lane at the destination via lane changes in order to turn right. The results compare simulation-based lane changing with and without Deep Q-Learning: with Deep Q-Learning, the average travel speed improved by about 40%, while the average waiting time decreased by about 2 seconds and the average queue length by about 2.3 vehicles.
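
Stripped of the SUMO plumbing, the per-step decision reduces to choosing among keep-lane, change-left, and change-right from the current Q-values. The toy epsilon-greedy selector below is an illustration of that loop, not the authors' code; the state features and Q-values would come from the SUMO-based environment.

```python
import random

# Hedged sketch of the lane-change action selection: three discrete actions,
# chosen epsilon-greedily from Q-values supplied by the learned network.

KEEP, LEFT, RIGHT = 0, 1, 2

def select_lane_action(q_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice([KEEP, LEFT, RIGHT])    # explore
    return max(range(3), key=lambda a: q_values[a])  # exploit

print(select_lane_action([0.2, 0.8, -0.1]))
```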

Application of Deep Recurrent Q Network with Dueling Architecture for Optimal Sepsis Treatment Policy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Ho, Ngoc-Huynh
    • 스마트미디어저널 / Vol. 10, No. 2 / pp.48-54 / 2021
  • Sepsis is one of the leading causes of mortality globally, and it costs billions of dollars annually. However, treating septic patients is currently highly challenging, and more research is needed into a general treatment method for sepsis. In this work, we therefore propose a reinforcement learning method for learning optimal treatment strategies for septic patients. We model the patient physiological time series data as the input to a deep recurrent Q-network that learns reliable treatment policies. We evaluate our model using an off-policy evaluation method, and the experimental results indicate that it outperforms the physicians' policy, reducing patient mortality by up to 3.04%. Thus, our model can be used as a tool to reduce patient mortality by supporting clinicians in making dynamic decisions.
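
For readers unfamiliar with the architecture, the sketch below shows the general shape of a dueling deep recurrent Q-network: an LSTM summarizes the physiological time series, and separate value and advantage heads are recombined into Q-values. The feature size and the 25-action treatment space are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

# Hedged sketch of a dueling deep recurrent Q-network: an LSTM encodes the
# patient's time series, then value/advantage heads are combined into
# Q-values via the standard dueling formula. Sizes are assumptions.

class DuelingDRQN(nn.Module):
    def __init__(self, obs_dim=48, hidden=128, n_actions=25):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, time, obs_dim)
        h, _ = self.lstm(x)
        h_last = h[:, -1]                 # summary of the history so far
        v = self.value(h_last)            # state value V(s)
        a = self.advantage(h_last)        # advantages A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)  # dueling combination

q = DuelingDRQN()(torch.randn(2, 10, 48))
print(q.shape)  # torch.Size([2, 25])
```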