• Title/Summary/Keyword: deep q-learning


Application of Deep Learning: A Review for Firefighting

  • Shaikh, Muhammad Khalid
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.73-78
    • /
    • 2022
  • The aim of this paper is to investigate the prevalence of Deep Learning in the literature on the Fire & Rescue Service. It is found that deep learning techniques are only beginning to benefit firefighters. The popular areas where deep learning techniques are making an impact are situational awareness, decision making, mental stress, injuries, firefighter well-being (such as sudden falls, inability to move, and breathlessness), path planning by firefighters while getting to a fire scene, wayfinding, tracking firefighters, firefighter physical fitness, employment, prediction of firefighter intervention, firefighter operations such as object recognition in smoky areas, firefighter efficacy, smart firefighting using edge computing, firefighting in teams, and firefighter clothing and safety. The techniques found applied to firefighting were Deep Learning, traditional K-Means clustering with engineered time- and frequency-domain features, Convolutional Autoencoders, Long Short-Term Memory (LSTM), Deep Neural Networks, Simulation, VR, ANN, Deep Q-Learning, deep learning based on conditional generative adversarial networks, Decision Trees, Kalman Filters, Computational Models, Partial Least Squares, Logistic Regression, Random Forest, Edge Computing, the C5 Decision Tree, Restricted Boltzmann Machines, Reinforcement Learning, and Recurrent LSTM. The literature review is centered on firefighters not involved in wildland fires; the focus was also not on the fire itself. It should be noted that several deep learning techniques, such as CNNs, were mostly used for fire behavior, fire imaging, and fire identification; papers dealing with fire behavior were likewise not part of this literature review.

Research on Unmanned Aerial Vehicle Mobility Model based on Reinforcement Learning (강화학습 기반 무인항공기 이동성 모델에 관한 연구)

  • Kyoung Hun Kim;Min Kyu Cho;Chang Young Park;Jeongho Kim;Soo Hyun Kim;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.33-39
    • /
    • 2023
  • Recently, reinforcement learning has been used to improve the communication performance of flying ad-hoc networks (FANETs) and to design mobility models. A mobility model is a key factor for predicting and controlling the movement of unmanned aerial vehicles (UAVs). In this paper, we designed and analyzed the performance of a Q-learning model with Fourier basis function approximation and a Deep Q-Network (DQN) model for optimal path finding in a three-dimensional virtual environment where UAVs operate. The experimental results show that the DQN model is more suitable for optimal path finding than the Q-learning model in a three-dimensional virtual environment.
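
As a rough illustration of the first approach compared above (linear Q-learning over Fourier basis features), the sketch below builds an order-2 cosine basis over a normalized 3-D position state and applies one semi-gradient Q-learning update. The state bounds, action count, and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import itertools
import numpy as np

def fourier_features(state, low, high, order=2):
    """Order-`order` Fourier cosine basis over a state vector scaled to [0, 1]."""
    s = (np.asarray(state) - low) / (high - low)
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=len(s))))
    return np.cos(np.pi * coeffs @ s)          # one feature per coefficient vector

def q_values(weights, state, low, high, order=2):
    """Linear Q-estimate: one weight vector per action over shared features."""
    return weights @ fourier_features(state, low, high, order)   # shape: (num_actions,)

# Hypothetical 3-D UAV position state, e.g. (x, y, z) inside a 100 m cube.
low, high = np.zeros(3), np.full(3, 100.0)
num_actions, order = 6, 2                      # e.g. +/- steps along each axis
weights = np.zeros((num_actions, (order + 1) ** 3))

# One semi-gradient Q-learning update (alpha, gamma are assumed hyperparameters).
alpha, gamma = 0.1, 0.99
s, a, r, s_next = np.array([10.0, 20.0, 5.0]), 3, -1.0, np.array([10.0, 20.0, 6.0])
phi = fourier_features(s, low, high, order)
td_target = r + gamma * q_values(weights, s_next, low, high, order).max()
weights[a] += alpha * (td_target - weights[a] @ phi) * phi
```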

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.4
    • /
    • pp.404-412
    • /
    • 2020
  • Everyday tasks often involve at least two people moving objects such as tables and beds, and the balance of such an object changes with each person's actions. However, many previous studies performed such tasks solely with robots, without factoring in human cooperation. Therefore, in this paper, we propose a cooperative table-balancing robot based on Q-learning that enables cooperative work between human and robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human action classification uses a deep learning model, specifically AlexNet, and achieves 96.9% accuracy under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials. The overall results show that the Q-function converged stably within this number of episodes, and this stable convergence determined the Q-learning policies for the robot's actions. Video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
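
For reference, the core of tabular Q-learning as used above can be written in a short loop. The states, actions, and reward values below are placeholders, not the paper's table-balancing setup; only the episode count of 2,000 is taken from the abstract.

```python
import random
import numpy as np

# Hypothetical discretization: a handful of table-tilt states and robot actions.
num_states, num_actions = 5, 3
Q = np.zeros((num_states, num_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # assumed hyperparameters

def step(state, action):
    """Placeholder environment: returns (next_state, reward, done)."""
    next_state = random.randrange(num_states)
    reward = 1.0 if next_state == 0 else -0.1    # e.g. reward a balanced table
    return next_state, reward, next_state == 0

for episode in range(2000):                      # the paper reports 2,000 episodes
    state, done = random.randrange(num_states), False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(num_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```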

Generation of ship's passage plan based on deep reinforcement learning (심층 강화학습 기반의 선박 항로계획 수립)

  • Hyeong-Tak Lee;Hyun Yang;Ik-Soon Cho
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.11a
    • /
    • pp.230-231
    • /
    • 2023
  • This study proposes a deep reinforcement learning-based algorithm to automatically generate a ship's passage plan. First, Busan Port and Gwangyang Port were selected as target areas, and a container ship with a draft of 16m was designated as the target vessel. The experimental results showed that the ship's passage plan generated using deep reinforcement learning was more efficient than the Q-learning-based algorithm used in previous research. This algorithm presents a method to generate a ship's passage plan automatically and can contribute to improving maritime safety and efficiency.
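
The abstract does not detail the underlying MDP formulation. Purely as an illustration of how a passage-planning task might be cast for deep RL, a toy grid "waterway" environment could look like the sketch below; the class name, grid size, actions, and reward values are all assumptions, not the authors' design.

```python
import numpy as np

class ToyWaterwayEnv:
    """Hypothetical grid waterway: steer a ship from start to goal with a small
    per-step penalty and a bonus on arrival."""

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # N, S, W, E heading changes

    def __init__(self, size=20, start=(0, 0), goal=(19, 19)):
        self.size, self.start, self.goal = size, start, goal

    def reset(self):
        self.pos = self.start
        return np.array(self.pos, dtype=np.float32) / self.size   # normalized state

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        done = self.pos == self.goal
        reward = 10.0 if done else -0.1            # assumed reward shaping
        return np.array(self.pos, dtype=np.float32) / self.size, reward, done
```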


DEEP LEARNING APPROACH FOR SOLVING A QUADRATIC MATRIX EQUATION

  • Kim, Garam;Kim, Hyun-Min
    • East Asian mathematical journal
    • /
    • v.38 no.1
    • /
    • pp.95-105
    • /
    • 2022
  • In this paper, we consider a quadratic matrix equation Q(X) = AX² + BX + C = 0, where A, B, C ∈ ℝ^(n×n). A new approach is proposed to find solutions of Q(X) using the novel structure of the information processing system. We also present some numerical experiments with an artificial neural network.
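
The abstract does not specify the network architecture. As one hedged sketch of the general idea (treating the Frobenius norm of the residual Q(X) as a training loss and the solvent X as learnable parameters), something along the following lines could be tried; the matrix size, optimizer, and step count are assumptions. Whether the residual can be driven to zero depends on whether a real solvent exists for the given A, B, C.

```python
import torch

torch.manual_seed(0)
n = 4
# Random problem data; in practice A, B, C come from the application.
A, B, C = (torch.randn(n, n) for _ in range(3))

# Treat the solvent X as a learnable parameter and minimize ||Q(X)||_F.
X = torch.zeros(n, n, requires_grad=True)
optimizer = torch.optim.Adam([X], lr=1e-2)

for step in range(5000):
    residual = A @ X @ X + B @ X + C              # Q(X) = A X^2 + B X + C
    loss = torch.linalg.matrix_norm(residual)     # Frobenius norm of the residual
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final residual norm: {loss.item():.3e}")
```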

Comparison of Activation Functions of Reinforcement Learning in OpenAI Gym Environments (OpenAI Gym 환경에서 강화학습의 활성화함수 비교 분석)

  • Myung-Ju Kang
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.01a
    • /
    • pp.25-26
    • /
    • 2023
  • In this paper, an agent is trained via reinforcement learning on CartPole-v1, provided by the OpenAI Gym environment, and the performance of the activation functions used during training is compared and analyzed. The activation functions applied are Sigmoid, ReLU, LeakyReLU, and Softplus, and the reward values obtained when each activation function is applied to DQN (Deep Q-Networks) reinforcement learning are compared. The experimental results show that the reward is highest when the ReLU activation function is used.
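
For reference, a minimal way to swap activation functions in a small CartPole Q-network is sketched below. The layer sizes and the training loop (omitted) are assumptions, not the paper's exact setup; only the four activation functions and the CartPole-v1 dimensions (4 observations, 2 actions) follow the abstract.

```python
import torch
import torch.nn as nn

ACTIVATIONS = {
    "sigmoid": nn.Sigmoid,
    "relu": nn.ReLU,
    "leaky_relu": nn.LeakyReLU,
    "softplus": nn.Softplus,
}

def make_q_network(activation: str, state_dim: int = 4, num_actions: int = 2) -> nn.Sequential:
    """Small fully connected Q-network for CartPole-v1 (4 observations, 2 actions)."""
    act = ACTIVATIONS[activation]
    return nn.Sequential(
        nn.Linear(state_dim, 64), act(),
        nn.Linear(64, 64), act(),
        nn.Linear(64, num_actions),   # one Q-value per action
    )

# Build one network per activation; each would be trained with the same DQN loop
# and the resulting episode rewards compared.
q_nets = {name: make_q_network(name) for name in ACTIVATIONS}
```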


Deep reinforcement learning for optimal life-cycle management of deteriorating regional bridges using double-deep Q-networks

  • Xiaoming, Lei;You, Dong
    • Smart Structures and Systems
    • /
    • v.30 no.6
    • /
    • pp.571-582
    • /
    • 2022
  • Optimal life-cycle management is a challenging issue for deteriorating regional bridges. Due to the complexity of regional bridge structural conditions and the large number of inspection and maintenance actions, decision-makers generally choose traditional passive management strategies, which are less efficient and less cost-effective. To tackle these problems, this paper suggests a deep reinforcement learning framework employing double deep Q-networks (DDQNs) to improve the life-cycle management of deteriorating regional bridges. It produces optimal maintenance plans under constraints so as to maximize maintenance cost-effectiveness to the greatest extent possible. The DDQN method handles the overestimation of Q-values that affects the Nature DQN. This study also identifies regional bridge deterioration characteristics and the consequences of scheduled maintenance from years of inspection data. To validate the proposed method, a case study containing hundreds of bridges is used to develop optimal life-cycle management strategies. The optimization solutions recommend fewer replacement actions and prefer preventative repair actions when bridges are damaged or are expected to be damaged. By employing the optimal life-cycle regional maintenance strategies, bridge conditions can be maintained at a good level. Compared to the Nature DQN, DDQNs offer an optimized scheme containing fewer low-condition bridges and a more cost-effective life-cycle management plan.
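
For context, the difference between the Nature DQN target and the double DQN target mentioned above can be written in a few lines. The network definitions and replay buffer are omitted, and the tensor shapes (batched transitions) are assumptions.

```python
import torch

def dqn_target(reward, next_state, done, target_net, gamma=0.99):
    """Nature DQN: the target network both selects and evaluates the next action,
    which tends to overestimate Q-values."""
    next_q = target_net(next_state).max(dim=1).values
    return reward + gamma * (1.0 - done) * next_q

def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double DQN: the online network selects the action and the target network
    evaluates it, reducing the overestimation bias."""
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)
    next_q = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q
```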

Performance Analysis of Deep Reinforcement Learning for Crop Yield Prediction (작물 생산량 예측을 위한 심층강화학습 성능 분석)

  • Ohnmar Khin;Sung-Keun Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.1
    • /
    • pp.99-106
    • /
    • 2023
  • Recently, many studies on crop yield prediction using deep learning technology have been conducted. These algorithms have difficulty constructing a linear map between input data sets and crop prediction results, and their implementation depends heavily on the rate of acquired attributes. Deep reinforcement learning can overcome these limitations. This paper analyzes the performance of DQN, Double DQN, and Dueling DQN to improve crop yield prediction. The DQN algorithm suffers from the overestimation problem, whereas Double DQN reduces overestimation and leads to better results. The proposed models achieve this by reducing error and increasing prediction accuracy.
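
Since the abstract compares DQN, Double DQN, and Dueling DQN, a sketch of the dueling head (separate value and advantage streams combined with mean-subtracted advantages) may be useful for orientation; the layer sizes below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling architecture: shared trunk, then separate value and advantage streams."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                 # V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) keeps the decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```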

A Study on Application of Reinforcement Learning Algorithm Using Pixel Data (픽셀 데이터를 이용한 강화 학습 알고리즘 적용에 관한 연구)

  • Moon, Saemaro;Choi, Yonglak
    • Journal of Information Technology Services
    • /
    • v.15 no.4
    • /
    • pp.85-95
    • /
    • 2016
  • Recently, deep learning and machine learning have attracted considerable attention, and many supporting frameworks have appeared. In the artificial intelligence field, a large body of research is underway to apply the relevant knowledge to complex problem-solving, necessitating the application of various learning algorithms and training methods to artificial intelligence systems. In addition, there is a dearth of performance evaluation of decision-making agents. The decision-making agent designed through this research finds optimal solutions using reinforcement learning methods: it collects raw pixel data observed from dynamic environments and makes decisions by itself based on those data. The agent uses convolutional neural networks to classify the situations it confronts, and the data observed from the environment undergo preprocessing before being used. This research describes how the convolutional neural networks and the decision-making agent are configured, analyzes learning performance with a value-based algorithm (Deep Q-Networks) and a policy-based algorithm (Policy Gradient), sets forth their differences, and demonstrates how the convolutional neural networks affect overall learning performance when using pixel data. This research is expected to contribute to the improvement of artificial intelligence systems that can efficiently find optimal solutions by using features extracted from raw pixel data.
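
As a generic illustration of the pixel-based setup described above (frames converted to grayscale, resized, stacked, and fed to a convolutional Q-network), the sketch below shows one possible preprocessing step and network; the exact preprocessing, frame size, and layer sizes are assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def preprocess(frame: np.ndarray, size: int = 84) -> np.ndarray:
    """Crude preprocessing: RGB -> grayscale, then nearest-neighbour resize to size x size."""
    gray = frame.mean(axis=2)                               # (H, W)
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0                 # normalized (size, size)

class PixelQNetwork(nn.Module):
    """Convolutional Q-network over a stack of preprocessed 84x84 frames."""

    def __init__(self, num_actions: int, stack: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(stack, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                                  nn.Linear(512, num_actions))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:   # (batch, stack, 84, 84)
        return self.head(self.conv(frames))
```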