• Title/Abstract/Keywords: DDQN

Search results: 6

Development of Optimal Design Technique of RC Beam using Multi-Agent Reinforcement Learning (다중 에이전트 강화학습을 이용한 RC보 최적설계 기술개발)

  • 강주원; 김현수
    • 한국공간구조학회논문집 / Vol. 23, No. 2 / pp. 29-36 / 2023
  • Reinforcement learning (RL) is widely applied in various engineering fields. In particular, RL has performed well on control problems such as vehicles, robotics, and active structural control systems. However, little research has been conducted to date on applying RL to optimal structural design. In this study, the applicability of RL to the structural design of reinforced concrete (RC) beams was investigated. An RC beam design problem introduced in a previous study was used for the comparative study. The deep Q-network (DQN) is a well-known RL algorithm that performs well in discrete action spaces, so it was adopted here. The agent's action must represent the design variables of the RC beam, but there are too many design variables to encode in the action of a single conventional DQN. To solve this problem, a multi-agent DQN was used. For a more effective learning process, the double deep Q-network (DDQN), an improved version of the conventional DQN based on double Q-learning, was employed. The multi-agent DDQN was trained to produce optimal RC beam designs satisfying the American Concrete Institute code (ACI 318) without any hand-labeled dataset. Five DDQN agents provide the actions for beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The five agents were trained for 10,000 episodes, and the performance of the multi-agent DDQN was evaluated on 100 test design cases. This study shows that the multi-agent DDQN algorithm can successfully produce structural design results for RC beams.
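  The core trick here is splitting one intractably large joint action space across five cooperating agents that share a single reward. Below is a minimal sketch of that decomposition; the design-variable values, the reward function, and the bandit-style tabular update are illustrative placeholders, not the paper's DDQN networks or its ACI 318 checks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete choices for each design variable (illustrative values,
# not taken from the paper). Each agent owns exactly one of these action spaces.
DESIGN_SPACES = {
    "beam_width_mm":   [250, 300, 350, 400],
    "beam_depth_mm":   [400, 450, 500, 550, 600],
    "rebar_size_mm":   [16, 19, 22, 25],
    "rebar_count":     [2, 3, 4, 5, 6],
    "stirrup_size_mm": [10, 13],
}

class OneVariableAgent:
    """One agent per design variable, with a bandit-style value table."""
    def __init__(self, n_actions, lr=0.1):
        self.q = np.zeros(n_actions)
        self.lr = lr

    def act(self, eps):
        if rng.random() < eps:                    # epsilon-greedy exploration
            return int(rng.integers(len(self.q)))
        return int(np.argmax(self.q))

    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

def design_reward(design):
    # Placeholder objective: in the paper this would combine ACI 318
    # strength/serviceability checks with a cost term; here we simply
    # prefer smaller cross sections.
    return -(design["beam_width_mm"] * design["beam_depth_mm"]) / 1e5

agents = {name: OneVariableAgent(len(vals)) for name, vals in DESIGN_SPACES.items()}
for episode in range(1000):
    eps = max(0.05, 1.0 - episode / 500)          # linearly decaying exploration
    picks = {name: agent.act(eps) for name, agent in agents.items()}
    design = {name: DESIGN_SPACES[name][i] for name, i in picks.items()}
    shared_r = design_reward(design)              # one reward broadcast to all agents
    for name, agent in agents.items():
        agent.update(picks[name], shared_r)

print({name: DESIGN_SPACES[name][int(np.argmax(a.q))] for name, a in agents.items()})
```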

Resource Allocation Strategy of Internet of Vehicles Using Reinforcement Learning

  • Xi, Hongqi; Sun, Huijuan
    • Journal of Information Processing Systems / Vol. 18, No. 3 / pp. 443-456 / 2022
  • An efficient and reasonable resource allocation strategy can greatly improve the service quality of the Internet of Vehicles (IoV). However, most current allocation methods suffer from the overestimation problem, making it difficult to provide high-performance IoV network services. To address this, this paper proposes a network resource allocation strategy based on the deep learning model DDQN. First, the IoV system is modeled in detail, including the communication model, user-layer computing model, edge-layer offloading model, and mobility model, to reflect realistic, complex IoV application scenarios. Then, the DDQN model is used to solve the mathematical resource-allocation model. By decoupling the selection of the target-Q action from the calculation of the target Q value, overestimation is avoided, so the method can provide higher-quality network services and maintain superior computing and processing performance in complex real-world scenarios. Finally, simulation results show that the proposed method keeps the network delay within 65 ms and performs well in high-concurrency, complex scenes with a task data volume of 500 kbits.
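  The decoupling described here is the defining double-DQN move: the online network selects the next action and a separate target network evaluates it. A brief sketch of the two target computations, with plain arrays standing in for the network outputs, shows where vanilla DQN's overestimation bias comes from:

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double-DQN target: online net picks the action, target net scores it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # selection: online network
    return reward + gamma * q_target_next[a_star]   # evaluation: target network

def dqn_target(reward, q_target_next, gamma=0.99, done=False):
    """Vanilla DQN target: one max does both selection and evaluation,
    so any upward noise in a Q estimate gets chosen *and* propagated."""
    if done:
        return reward
    return reward + gamma * float(np.max(q_target_next))

# Toy illustration: noisy estimates of identical true Q-values (all zero).
rng = np.random.default_rng(0)
online = rng.normal(0, 1, 4)    # independent estimation noise per network
target = rng.normal(0, 1, 4)
print(dqn_target(0.0, target), ddqn_target(0.0, online, target))
# dqn_target takes the max of noisy values (biased upward); ddqn's
# independent selection and evaluation largely cancels that bias.
```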

Path Planning with Obstacle Avoidance Based on Double Deep Q Networks (이중 심층 Q 네트워크 기반 장애물 회피 경로 계획)

  • 자오 용지앙; 첸센폰; 성승제; 허정규; 임창균
    • 한국전자통신학회논문지 / Vol. 18, No. 2 / pp. 231-240 / 2023
  • Training a robot to avoid obstacles automatically in deep reinforcement learning (DRL)-based path planning is not an easy task. Many researchers have explored the possibility of using DRL to train a robot in a given environment to plan paths that avoid obstacles. However, because of the many factors arising from varied environments and from the sensors mounted on the robot, it is rare in practice for a robot to traverse a given scenario while completely avoiding every obstacle. To explore this problem and run obstacle-avoidance path-planning experiments, we built a testbed and mounted a camera on the robot. The robot's goal is to travel from the start point to the end point as quickly as possible while avoiding walls and obstacles. This paper proposes a double deep Q-network (DDQN) to verify the feasibility of DRL for avoiding walls and obstacles. The robot used in the experiments is a Jetbot, and the approach should be applicable to robot task scenarios that require obstacle avoidance in automated path planning.
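  The paper's Jetbot setup (camera input, physical testbed) cannot be reproduced here, so the sketch below substitutes a toy grid world to show the DDQN training loop such an experiment relies on: epsilon-greedy exploration, an experience-replay buffer, double-Q minibatch updates, and periodic target synchronization. The map, rewards, and hyperparameters are all hypothetical.

```python
import random
from collections import deque

import numpy as np

# Toy stand-in for the testbed: a 5x5 grid, start (0,0), goal (4,4), wall cells.
WALLS = {(1, 1), (2, 3), (3, 1)}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right

def step(state, a):
    r, c = state[0] + MOVES[a][0], state[1] + MOVES[a][1]
    if not (0 <= r < 5 and 0 <= c < 5) or (r, c) in WALLS:
        return state, -1.0, False                   # hit a wall or the edge
    if (r, c) == (4, 4):
        return (r, c), 10.0, True                   # reached the goal
    return (r, c), -0.1, False                      # step cost favors short paths

replay = deque(maxlen=10_000)                       # experience-replay buffer
q_online = np.zeros((5, 5, 4))                      # tabular stand-ins for the
q_target = np.zeros((5, 5, 4))                      # online and target networks

for episode in range(2000):
    s, eps = (0, 0), max(0.05, 1.0 - episode / 1000)
    for _ in range(50):
        a = random.randrange(4) if random.random() < eps else int(np.argmax(q_online[s]))
        s2, r, done = step(s, a)
        replay.append((s, a, r, s2, done))
        s = s2
        if done:
            break
    # Minibatch of double-Q updates sampled from the replay buffer
    for s0, a0, r0, s1, d in random.sample(list(replay), min(32, len(replay))):
        a_star = int(np.argmax(q_online[s1]))                      # select: online
        target = r0 + (0.0 if d else 0.95 * q_target[s1][a_star])  # evaluate: target
        q_online[s0][a0] += 0.1 * (target - q_online[s0][a0])
    if episode % 50 == 0:
        q_target[:] = q_online                      # periodic target-network sync
```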

Application of Reinforcement Learning in Detecting Fraudulent Insurance Claims

  • Choi, Jung-Moon; Kim, Ji-Hyeok; Kim, Sung-Jun
    • International Journal of Computer Science & Network Security / Vol. 21, No. 9 / pp. 125-131 / 2021
  • Detecting fraudulent insurance claims is difficult because the available data are small and imbalanced. Some research has been carried out to better cope with the various types of fraudulent claims. Technology for detecting fraudulent insurance claims is now increasingly used in the insurance and technology fields, thanks to artificial intelligence (AI) methods that supplement traditional statistical detection and rule-based methods. This study obtained meaningful results for a fraudulent-insurance-claim detection model based on machine learning (ML) and deep learning (DL) technologies, using fraudulent-claim data from previous research. In our search for a way to enhance detection, we investigated the reinforcement learning (RL) method and examined how it could be applied to detecting fraudulent claims. Because there are few previous cases of applying RL to this problem, we first had to define the essential RL elements based on prior research on anomaly detection. We then applied the deep Q-network (DQN) and the double deep Q-network (DDQN) to learn the detection model, and confirmed that our model outperformed previous machine learning models.
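  The key step here is "defining the RL essential elements" for a supervised, imbalanced problem. One common framing, sketched below with entirely hypothetical features and reward values (the paper's exact definitions are not given), treats each claim as a one-step episode: state = claim features, action = {legitimate, fraudulent}, and an asymmetric reward that weights the rare fraud class more heavily.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(action, label, fraud_weight=10.0):
    # Asymmetric reward: catching (or missing) the rare fraud class counts
    # fraud_weight times more than the majority class. Values are illustrative.
    if action == label:
        return fraud_weight if label == 1 else 1.0
    return -fraud_weight if label == 1 else -1.0

# Linear Q-function over claim features, a stand-in for the paper's DQN/DDQN.
n_features = 8
W = np.zeros((2, n_features))                 # one row of weights per action

for step in range(5000):
    x = rng.normal(size=n_features)           # fake claim feature vector
    label = int(rng.random() < 0.05)          # ~5% fraud: class imbalance
    q = W @ x
    a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(q))
    r = reward(a, label)
    # One-step episodes have no next state, so the TD target is just r.
    W[a] += 0.01 * (r - q[a]) * x
```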

Roll control of Underwater Vehicle based Reinforcement Learning using Advantage Actor-Critic (Advantage Actor-Critic 강화학습 기반 수중운동체의 롤 제어)

  • 이병준
    • 한국군사과학기술학회지 / Vol. 24, No. 1 / pp. 123-132 / 2021
  • For an underwater vehicle to perform various tasks, it is important to control its depth, course, and roll. Designing such a controller requires constructing a dynamic model of the underwater vehicle and selecting appropriate hydrodynamic coefficients. Because the dynamic model is linearized under the assumption of a limited operating range, the control performance in the steady state is satisfactory, but the control performance in the transient state may be unstable. To overcome these problems with conventional controller design, we propose an A2C (Advantage Actor-Critic) based roll controller for underwater vehicles; among the reinforcement learning methods that learn through rewards for actions, A2C offers stable learning performance in continuous spaces. The performance of the proposed A2C-based roll controller is verified through simulation and compared with PID and Dueling DDQN based roll controllers.
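  To make the actor-critic mechanics concrete, here is a minimal A2C update loop on a hypothetical one-axis roll toy; the plant dynamics, discretized fin commands, and gains are invented for illustration and are not the paper's vehicle model or hydrodynamic coefficients. The critic learns V(s) by TD(0), and the actor ascends the policy gradient weighted by the one-step advantage A = r + γV(s') − V(s).

```python
import numpy as np

rng = np.random.default_rng(1)

ACTIONS = np.array([-1.0, 0.0, 1.0])          # discretized fin commands
theta = np.zeros((3, 2))                      # actor: action logits = theta @ s
w = np.zeros(2)                               # critic: V(s) = w @ s

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def plant(s, u, dt=0.05):
    roll, rate = s
    rate += dt * (-0.5 * rate - 2.0 * roll + u)   # toy roll dynamics
    return np.array([roll + dt * rate, rate])

gamma, lr_actor, lr_critic = 0.99, 1e-2, 1e-1
for episode in range(500):
    s = rng.normal(0, 0.3, size=2)            # random initial roll disturbance
    for t in range(100):
        pi = softmax(theta @ s)
        a = rng.choice(3, p=pi)
        s2 = plant(s, ACTIONS[a])
        r = -(s2[0] ** 2)                     # penalize roll angle
        # One-step advantage: A = r + gamma * V(s') - V(s)
        adv = r + gamma * (w @ s2) - (w @ s)
        w += lr_critic * adv * s              # critic: TD(0) update
        grad_log = -pi[:, None] * s[None, :]  # d log pi(a|s) / d theta
        grad_log[a] += s
        theta += lr_actor * adv * grad_log    # actor: advantage-weighted step
        s = s2
```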

PGA: An Efficient Adaptive Traffic Signal Timing Optimization Scheme Using Actor-Critic Reinforcement Learning Algorithm

  • Shen, Si; Shen, Guojiang; Shen, Yang; Liu, Duanyang; Yang, Xi; Kong, Xiangjie
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp. 4268-4289 / 2020
  • Advanced traffic signal timing method plays very important role in reducing road congestion and air pollution. Reinforcement learning is considered as superior approach to build traffic light timing scheme by many recent studies. It fulfills real adaptive control by the means of taking real-time traffic information as state, and adjusting traffic light scheme as action. However, existing works behave inefficient in complex intersections and they are lack of feasibility because most of them adopt traffic light scheme whose phase sequence is flexible. To address these issues, a novel adaptive traffic signal timing scheme is proposed. It's based on actor-critic reinforcement learning algorithm, and advanced techniques proximal policy optimization and generalized advantage estimation are integrated. In particular, a new kind of reward function and a simplified form of state representation are carefully defined, and they facilitate to improve the learning efficiency and reduce the computational complexity, respectively. Meanwhile, a fixed phase sequence signal scheme is derived, and constraint on the variations of successive phase durations is introduced, which enhances its feasibility and robustness in field applications. The proposed scheme is verified through field-data-based experiments in both medium and high traffic density scenarios. Simulation results exhibit remarkable improvement in traffic performance as well as the learning efficiency comparing with the existing reinforcement learning-based methods such as 3DQN and DDQN.