• Title/Summary/Keyword: 심층 강화학습 (deep reinforcement learning)


Goal Oriented Dialogue System Based on Deep Recurrent Q Network (심층 순환 Q 네트워크 기반 목적 지향 대화 시스템)

  • Park, Geonwoo; Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2018.10a / pp.147-150 / 2018
  • A goal-oriented dialogue system is a combination of sub-models such as natural language understanding, a dialogue manager, and natural language generation, which makes it vulnerable to error propagation from the sub-models. To address this problem, we combine the natural language understanding model and the dialogue manager into a single network and propose a deep Q-network that is robust to errors. This paper proposes a goal-oriented dialogue system based on a deep recurrent Q-network, which applies a deep Q-network to an LSTM, a recurrent neural network capable of capturing the overall flow of a dialogue. Experimental results show that the proposed deep recurrent Q-network outperforms an LSTM and a deep Q-network in precision by 1.0%p and 6.7%p, respectively.

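To make the idea concrete, here is a minimal PyTorch sketch of a deep recurrent Q-network: an LSTM tracks the dialogue across turns and a linear head maps its state to Q-values over system actions. The flat turn encoding, dimensions, and action count are illustrative assumptions, not the paper's configuration.

```python
# A minimal DRQN sketch, assuming a flat per-turn dialogue-state encoding
# and a discrete set of system actions (hypothetical sizes).
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        # The LSTM carries the dialogue history across turns.
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # The linear head maps the recurrent state to Q-values per action.
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, turns, obs_dim) -- one vector per dialogue turn.
        out, hidden = self.lstm(obs_seq, hidden)
        return self.q_head(out), hidden  # Q-values for every turn

# Illustrative usage: 64-dim turn encodings, 10 candidate system actions.
net = DRQN(obs_dim=64, hidden_dim=128, num_actions=10)
q_values, _ = net(torch.randn(1, 5, 64))
action = q_values[0, -1].argmax().item()  # greedy action at the latest turn
```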

A Queue Management Mechanism for Service groups based on Deep Reinforcement Learning (심층강화학습 기반 서비스 그룹별 큐 관리 메커니즘)

  • Jung, Seol-Ryung; Lee, Sung-Keun
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.6 / pp.1099-1104 / 2020
  • In order to provide various Internet-based application services, it is ideal to guarantee quality of service (QoS) for each flow. However, realizing this is not an easy task. It is effective to classify multiple flows with the same or similar QoS requirements into the same group and to provide quality of service per group. The queue management mechanism in a router plays a very important role in transmitting data efficiently and supporting differentiated quality of service per service group. An intelligent and adaptive queue management mechanism is required to support various multimedia services efficiently. This paper proposes an intelligent queue management mechanism based on deep reinforcement learning that decides whether to deliver packets for each group, based on the traffic information of each flow group observed over a certain period and the current network state.
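
As a rough illustration of the decision loop described above, the sketch below has an agent observe per-group traffic statistics gathered over a window and choose, for each service group, whether to forward or drop incoming packets. The group count, feature layout, and network sizes are assumptions for illustration, not the paper's design.

```python
# Hedged sketch: a small policy network emits forward/drop logits per group.
import torch
import torch.nn as nn

NUM_GROUPS = 4          # service groups with similar QoS requirements
FEATS_PER_GROUP = 3     # e.g. arrival rate, queue length, drop rate

policy = nn.Sequential(
    nn.Linear(NUM_GROUPS * FEATS_PER_GROUP, 64),
    nn.ReLU(),
    nn.Linear(64, 2 * NUM_GROUPS),  # forward/drop logits for each group
)

def act(traffic_stats: torch.Tensor) -> torch.Tensor:
    # traffic_stats: (NUM_GROUPS, FEATS_PER_GROUP) gathered over the window.
    logits = policy(traffic_stats.flatten()).view(NUM_GROUPS, 2)
    return logits.argmax(dim=1)  # 0 = forward, 1 = drop, per group

decisions = act(torch.rand(NUM_GROUPS, FEATS_PER_GROUP))
```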

Neural Architecture Search for Korean Text Classification (한국어 문서 분류를 위한 신경망 구조 탐색)

  • ByoungKyu Ji
    • Annual Conference on Human and Language Technology / 2023.10a / pp.125-130 / 2023
  • Although interest in Korean natural language processing with deep neural networks has grown recently, neural architecture search suited to Korean NLP has not been studied. In this paper, we search for a deep neural network architecture suitable for Korean document classification using long short-term memory networks, with a reinforcement learning algorithm that takes document classification accuracy as its reward; we also analyze the performance of the pre-trained Korean embeddings used for the search and the discovered network architectures. Compared with existing Korean NLP models on four Korean document classification tasks, the discovered architecture generally performed better and was more efficient due to its smaller model size.

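The core mechanism, an RL controller rewarded by classification accuracy, can be sketched with a simple REINFORCE loop over a discrete search space. The search space, the `train_and_eval` stub, and the learning rate below are illustrative assumptions; the paper's actual space and training procedure may differ.

```python
# Hedged REINFORCE sketch: sample an LSTM configuration, reward it with
# (placeholder) validation accuracy, and update the sampling distribution.
import torch
import torch.nn as nn

SEARCH_SPACE = {"hidden": [128, 256, 512], "layers": [1, 2, 3]}

logits = {k: nn.Parameter(torch.zeros(len(v))) for k, v in SEARCH_SPACE.items()}
opt = torch.optim.Adam(logits.values(), lr=0.05)

def train_and_eval(arch: dict) -> float:
    # Placeholder: train the sampled model on a Korean text classification
    # task and return validation accuracy (stubbed here for illustration).
    return 0.8 + 0.05 * (arch["hidden"] == 256)

baseline = 0.0
for step in range(100):
    dists = {k: torch.distributions.Categorical(logits=l) for k, l in logits.items()}
    picks = {k: d.sample() for k, d in dists.items()}
    arch = {k: SEARCH_SPACE[k][i.item()] for k, i in picks.items()}
    reward = train_and_eval(arch)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    loss = -sum(d.log_prob(picks[k]) for k, d in dists.items()) * (reward - baseline)
    opt.zero_grad(); loss.backward(); opt.step()
```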

Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving (안전하고 효과적인 자율주행을 위한 불확실성 순차 모델링)

  • Yoon, Jae Ung; Lee, Ju Hong
    • Smart Media Journal / v.11 no.9 / pp.9-20 / 2022
  • Deep reinforcement learning (RL) is an end-to-end, data-driven control method widely used in the autonomous driving domain. However, conventional RL approaches are difficult to apply to autonomous driving tasks due to inefficiency, instability, and uncertainty, all of which matter greatly in this domain. Although recent studies have attempted to solve these problems, they are computationally expensive and rely on special assumptions. In this paper, we propose a new algorithm, MCDT, that addresses inefficiency, instability, and uncertainty by introducing uncertainty-aware sequence modeling to the autonomous driving domain. The sequence modeling approach, which views reinforcement learning as a generative decision-making problem aimed at obtaining high rewards, avoids the disadvantages of existing studies and guarantees efficiency and stability, while also accounting for safety by integrating uncertainty estimation techniques. The proposed method was tested in the OpenAI Gym CarRacing environment, and the experimental results show that MCDT provides efficient, stable, and safe performance compared with existing reinforcement learning methods.
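
The sketch below conveys the flavor of the idea only: a return-conditioned policy (sequence modeling reduced to a single step) whose uncertainty is estimated by ensemble disagreement, with uncertain actions attenuated for safety. It is a simplification under assumed shapes and scaling, not the MCDT implementation.

```python
# Hedged sketch: ensemble of return-conditioned policies; disagreement
# (std across members) is treated as uncertainty and damps the command.
import torch
import torch.nn as nn

class ReturnConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=8, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))
    def forward(self, obs, target_return):
        # Condition the action on the desired return, as in sequence modeling.
        return self.net(torch.cat([obs, target_return], dim=-1))

ensemble = [ReturnConditionedPolicy() for _ in range(5)]

def safe_action(obs, target_return):
    acts = torch.stack([m(obs, target_return) for m in ensemble])
    mean, std = acts.mean(0), acts.std(0)
    # High disagreement signals uncertainty; shrink the command toward zero.
    return mean * (1.0 / (1.0 + std))

a = safe_action(torch.randn(8), torch.tensor([0.9]))
```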

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon; Oh, Ji-Heon; Jeong, Jin Gyun; Jung, Hwanseok; Lee, Jin Hyuk; Lopez, Patricio Rivera; Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.363-370 / 2022
  • Grasping a target object among clutter without collision requires machine intelligence, including environment recognition, target and obstacle recognition, collision-free path planning, and object-grasping intelligence for robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms were implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object-grasping intelligence is learned through deep reinforcement learning. In simulation, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas the system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In hardware, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas the system combined with path planning showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep RL.
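
One way to see how deep RL trades off collision avoidance against grasp success is through reward shaping, sketched below. The signal names and weights are hypothetical, for illustration only; the paper does not specify this exact reward.

```python
# Illustrative reward shaping for collision-free grasping, assuming the
# simulator exposes distance-to-target, collision, and grasp-success signals.
def grasp_reward(dist_to_target: float, collided: bool, grasped: bool) -> float:
    reward = -0.1 * dist_to_target        # move the hand toward the target
    if collided:
        reward -= 1.0                     # penalize touching clutter objects
    if grasped:
        reward += 10.0                    # large bonus for a successful grasp
    return reward
```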

Prediction Technique of Energy Consumption based on Reinforcement Learning in Microgrids (마이크로그리드에서 강화학습 기반 에너지 사용량 예측 기법)

  • Sun, Young-Ghyu; Lee, Jiyoung; Kim, Soo-Hyun; Kim, Soohwan; Lee, Heung-Jae; Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.175-181 / 2021
  • This paper analyzes an artificial intelligence-based approach to short-term energy consumption prediction. We employ reinforcement learning algorithms to overcome the limitations of the supervised learning algorithms usually applied to short-term energy consumption prediction. Supervised learning-based approaches have high complexity because they require contextual information as well as energy consumption data for sufficient performance. We propose a multi-agent deep reinforcement learning algorithm that predicts energy consumption from consumption data alone, reducing the complexity of both the data and the learning model. The proposed scheme is simulated using public energy consumption data, and its performance is confirmed: it predicts values close to the actual ones except for outlier data.
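
The casting of prediction as RL can be illustrated in miniature: the action is the forecast itself and the reward is the negative prediction error. The single-agent, bandit-style simplification below (with assumed discretization bins) is only a sketch of that framing, not the paper's multi-agent algorithm.

```python
# Hedged sketch: forecast-as-action with reward = -|forecast - actual|.
import numpy as np

BINS = np.linspace(0.0, 10.0, 21)     # candidate consumption levels (kWh)
q = np.zeros(len(BINS))               # value estimate per candidate forecast
alpha, eps = 0.1, 0.1
rng = np.random.default_rng(0)

def step(actual: float) -> float:
    # Epsilon-greedy choice of a forecast bin.
    i = rng.integers(len(BINS)) if rng.random() < eps else int(q.argmax())
    reward = -abs(BINS[i] - actual)   # closer forecasts earn higher reward
    q[i] += alpha * (reward - q[i])   # incremental value update
    return BINS[i]

for actual in [3.2, 3.4, 3.1, 3.3]:
    forecast = step(actual)
```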

Zero-shot Text Classification based on Reinforced Learning (강화학습 기반의 제로샷 텍스트 분류)

  • Zhang Songming; Inwhee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.439-441 / 2023
  • Traditional text classification methods require a substantial amount of labeled data and predefined classes, which limits their applicability and scalability. Zero-shot learning emerged to overcome these limitations. In text classification, zero-shot text classification is an important problem in which a model must classify instances without having seen any samples of the target classes. To address it, we propose a deep reinforcement learning (DRL) approach based on a policy network. With this method, the model adapts effectively to a new semantic space and improves zero-shot text classification accuracy relative to other models, with up to a 15.9% accuracy gain over XLM-R.
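
A hedged sketch of the policy-network framing follows: a learned compatibility scorer between a text embedding and class-description embeddings defines a policy over classes, trained with REINFORCE on seen classes and reused as-is for unseen ones. The embedding size, scorer, and reward are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: bilinear scorer as a policy over candidate classes.
import torch
import torch.nn as nn

EMB = 32
scorer = nn.Bilinear(EMB, EMB, 1)   # compatibility between text and class
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

def policy(text_emb, class_embs):
    scores = torch.stack([scorer(text_emb, c).squeeze() for c in class_embs])
    return torch.distributions.Categorical(logits=scores)

# One REINFORCE step on a (seen-class) example; unseen classes reuse the
# same scorer at test time, which is what makes the setup zero-shot.
text, classes, gold = torch.randn(EMB), [torch.randn(EMB) for _ in range(4)], 2
dist = policy(text, classes)
action = dist.sample()
reward = 1.0 if action.item() == gold else -1.0
loss = -dist.log_prob(action) * reward
opt.zero_grad(); loss.backward(); opt.step()
```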

Deep Reinforcement Learning-Based C-V2X Distributed Congestion Control for Real-Time Vehicle Density Response (실시간 차량 밀도에 대응하는 심층강화학습 기반 C-V2X 분산혼잡제어)

  • Byeong Cheol Jeon; Woo Yoel Yang; Han-Shin Jo
    • Journal of IKEEE / v.27 no.4 / pp.379-385 / 2023
  • Distributed congestion control (DCC) is a technology that mitigates channel congestion and improves communication performance in high-density vehicular networks. Traditional DCC techniques reduce channel congestion without considering quality of service (QoS) requirements. Such a design can lead to excessive DCC actions, potentially degrading other aspects of QoS. To address this issue, we propose a deep reinforcement learning-based QoS-adaptive DCC algorithm. Simulations were conducted using a quasi-real environment simulator that generates dynamic vehicle densities for evaluation. The results indicate that the proposed DCC algorithm comes closer to the targeted QoS than existing DCC algorithms.
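
The QoS-adaptive trade-off can be made concrete with a small reward sketch: the agent tunes a transmission parameter so that a QoS metric stays near its target without letting the channel get too busy. The action space, metric names (channel busy ratio, packet delivery ratio), and constants below are hypothetical illustrations, not the paper's specification.

```python
# Hedged sketch of a QoS-adaptive DCC reward and action space.
ACTIONS = [0.05, 0.1, 0.2, 0.5]   # candidate transmission intervals (s)

def dcc_reward(cbr: float, pdr: float, pdr_target: float = 0.95) -> float:
    congestion_penalty = max(0.0, cbr - 0.6)    # discourage a busy channel
    qos_gap = abs(pdr - pdr_target)             # stay close to the QoS target
    return -(qos_gap + 0.5 * congestion_penalty)
```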

Machine Scheduling Models Based on Reinforcement Learning for Minimizing Due Date Violation and Setup Change (납기 위반 및 셋업 최소화를 위한 강화학습 기반의 설비 일정계획 모델)

  • Yoo, Woosik; Seo, Juhyeok; Kim, Dahee; Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.24 no.3 / pp.19-33 / 2019
  • Recently, manufacturers have been struggling to use production equipment efficiently as their production methods become more sophisticated and complex. A typical factor hindering the efficiency of the manufacturing process is setup cost due to job changes. Especially for expensive production equipment such as semiconductor/LCD processes, efficient use of equipment is very important. Balancing the trade-off between meeting due dates and minimizing the setup cost incurred by changes of job type is a crucial planning task. In this study, we developed a reinforcement learning-based scheduling model for parallel machines with due dates and setup costs, with the goal of minimizing due date violations and setup costs. The proposed model is a Deep Q-Network (DQN) scheduling model. To validate its effectiveness, we compared it against a heuristic model and a DNN (deep neural network)-based model, and confirmed that the proposed DQN method causes fewer due date violations and lower setup costs than the benchmark methods.
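
The trade-off the DQN scheduler learns can be written down as a simple per-decision reward: each time a machine picks its next job, lateness and setup changes are penalized. The weights and feature names are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the due-date-vs-setup reward a DQN scheduler could learn.
def scheduling_reward(tardiness_h: float, setup_change: bool,
                      w_due: float = 1.0, w_setup: float = 0.5) -> float:
    reward = -w_due * tardiness_h          # due-date violation, hours late
    if setup_change:
        reward -= w_setup                  # cost of switching the job type
    return reward

# Example: finishing 2 hours late after a setup change.
r = scheduling_reward(tardiness_h=2.0, setup_change=True)  # -> -2.5
```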

Study of Deep Reinforcement Learning-Based Agents for Controlled Flight into Terrain (CFIT) Autonomous Avoidance (CFIT 자율 회피를 위한 심층강화학습 기반 에이전트 연구)

  • Lee, Yong Won; Yoo, Jae Leame
    • Journal of the Korean Society for Aviation and Aeronautics / v.30 no.2 / pp.34-43 / 2022
    • 2022
  • In Efforts to prevent CFIT accidents so far, have been emphasizing various education measures to minimize the occurrence of human errors, as well as enforcement measures. However, current engineering measures remain in a system (TAWS) that gives warnings before colliding with ground or obstacles, and even actual automatic avoidance maneuvers are not implemented, which has limitations that cannot prevent accidents caused by human error. Currently, various attempts are being made to apply machine learning-based artificial intelligence agent technologies to the aviation safety field. In this paper, we propose a deep reinforcement learning-based artificial intelligence agent that can recognize CFIT situations and control aircraft to avoid them in the simulation environment. It also describes the composition of the learning environment, process, and results, and finally the experimental results using the learned agent. In the future, if the results of this study are expanded to learn the horizontal and vertical terrain radar detection information and camera image information of radar in addition to the terrain database, it is expected that it will become an agent capable of performing more robust CFIT autonomous avoidance.