• Title/Summary/Keyword: Learning Navigation

Dynamic Window Approach with path-following for Unmanned Surface Vehicle based on Reinforcement Learning (무인수상정 경로점 추종을 위한 강화학습 기반 Dynamic Window Approach)

  • Heo, Jinyeong;Ha, Jeesoo;Lee, Junsik;Ryu, Jaekwan;Kwon, Yongjin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.1
    • /
    • pp.61-69
    • /
    • 2021
  • Recently, autonomous navigation technology has been actively developed owing to the increasing demand for unmanned surface vehicles (USVs). Local planning is essential for a USV to safely reach its destination along a path, and the dynamic window approach (DWA) is a well-known local path-planning scheme. However, the existing DWA algorithm does not consider path-line tracking, and the fixed weight coefficients of its evaluation function, which is its core component, cannot provide flexible path planning in all situations. Therefore, in this paper, we propose a new DWA algorithm that can follow path lines in all situations. The weight coefficients, previously fixed, were trained using reinforcement learning (RL), which has been actively studied recently. We implemented a simulation and compared the existing DWA algorithm with the algorithm proposed in this paper. As a result, we confirmed the effectiveness of the proposed algorithm.
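As a rough illustration of the evaluation function the abstract refers to, a DWA-style scoring rule with an added path-following term might look like the sketch below. The term names, the weights `alpha` through `delta`, and the `path_offset` penalty are illustrative assumptions, not the paper's actual formulation; the paper's contribution is tuning such weights with RL rather than fixing them.

```python
import math

def dwa_score(v, w, goal_heading, clearance, path_offset,
              alpha=0.8, beta=0.1, gamma=0.1, delta=0.2):
    """Score one candidate command (v, w). alpha..delta stand in for
    the evaluation-function weights the paper tunes with RL."""
    heading_term = math.pi - abs(goal_heading)  # prefer facing the goal
    clearance_term = clearance                  # prefer obstacle-free motion
    velocity_term = v                           # prefer making progress
    path_term = -abs(path_offset)               # penalize leaving the path line
    return (alpha * heading_term + beta * clearance_term
            + gamma * velocity_term + delta * path_term)

def best_command(candidates):
    """candidates: iterable of (v, w, goal_heading, clearance, path_offset)."""
    return max(candidates, key=lambda c: dwa_score(*c))[:2]
```

In the paper's setting, an RL agent would adjust the weights per situation instead of leaving them at fixed defaults.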

Comparison of Deep Learning Networks in Voice-Guided System for The Blind (시각장애인을 위한 음성안내 네비게이션 시스템의 심층신경망 성능 비교)

  • An, Ryun-Hui;Um, Sung-Ho;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.175-177
    • /
    • 2022
  • This paper introduces a system that assists blind users in moving to their destination and compares the performance of three deep neural networks (DNNs) used in the system. The system consists of a smartphone application that finds a route from the current location to the destination using GPS and a navigation API, and a module installed at bus stops that, using three types of DNN and a bus-information API, recognizes and announces the type and number of the bus about to arrive. To make the module recognize the bus number to board, we adopted Faster R-CNN, YOLOv4, and YOLOv5s; YOLOv5s showed the best performance in both accuracy and speed.

Optimal route generation method for ships using reinforcement learning (강화학습을 이용한 선박의 최적항로 생성기법)

  • Min-Kyu Kim;Jong-Hwa Kim;Ik-Soon Choi;Hyeong-Tak Lee;Hyun Yang
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.167-168
    • /
    • 2022
  • Determining the optimal route is one of the important factors in reducing voyage time and fuel consumption when operating a ship. Conventionally, a navigator's expert knowledge is required to determine a route, but a route chosen this way is difficult to regard as optimal. It is therefore necessary to generate an optimal route that considers both fuel-cost savings and the ship's safety. Previous studies have applied the A* and Dijkstra algorithms to minimize fuel consumption or voyage time, but these only find the shortest distance and cannot take the ship's safety or the sea state into account. To complement them, this study applies a reinforcement learning algorithm. Reinforcement learning finds a policy through actions that maximize the cumulative future reward; in this study, we propose a method that uses Q-learning, one such algorithm, to generate an optimal route that considers the ship's safety.
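A minimal Q-learning sketch of the kind of grid-based, safety-aware route search described above. The grid encoding, reward values, and hazard penalty are illustrative assumptions, not the paper's actual design:

```python
import random

def q_learning_route(grid, start, goal, episodes=3000,
                     alpha=0.5, gamma=0.9, eps=0.2):
    """grid[r][c]: 0 = open water, 1 = hazard. Learns Q-values with
    an epsilon-greedy policy; reward shaping is illustrative."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda act: q(s, act)))
            r, c = s[0] + a[0], s[1] + a[1]
            if not (0 <= r < rows and 0 <= c < cols):
                nxt, reward = s, -5.0       # leaving the area: penalty
            elif grid[r][c]:
                nxt, reward = s, -10.0      # hazard cell: safety penalty
            else:
                nxt, reward = (r, c), -1.0  # each move costs time/fuel
            if nxt == goal:
                reward = 100.0
            best_next = max(q(nxt, act) for act in actions)
            Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
            s = nxt
            if s == goal:
                break
    return Q

def greedy_path(Q, start, goal, limit=50):
    """Follow the learned Q-values greedily from start toward goal."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    path, s = [start], start
    while s != goal and len(path) < limit:
        a = max(actions, key=lambda act: Q.get((s, act), 0.0))
        s = (s[0] + a[0], s[1] + a[1])
        path.append(s)
    return path
```

The hazard penalty is what distinguishes this from a pure shortest-distance search such as A* or Dijkstra: routes through unsafe cells accumulate large negative value and are avoided.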

Study on Weather Data Interpolation of a Buoy Based on Machine Learning Techniques (기계 학습을 이용한 항로표지 기상 자료의 보간에 관한 연구)

  • Seong-Hun Jeong;Jun-Ik Ma;Seong-Hyun Jo;Gi-Ryun Lim;Jun-Woo Lee;Jun-Hee Han
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.72-74
    • /
    • 2022
  • Several types of data are collected from buoys owing to advances in hardware technology. However, the collected data are difficult to use because of errors, including missing values and outliers caused by mechanical faults and the meteorological environment. Therefore, in this study, linear interpolation is performed after re-inserting the missing time steps, so that machine learning can be applied to the otherwise insufficient meteorological data. After the linear interpolation, XGBoost and a KNN regressor are used to forecast the erroneous data, and the suggested model is evaluated using real-world data from a buoy.
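The preprocessing step described above, re-inserting missing time steps and filling them by linear interpolation, can be sketched as follows. The function name and data layout are illustrative, not the paper's code:

```python
def interpolate_missing(times, values, step):
    """Re-insert missing time steps into an irregular record and fill
    them by linear interpolation between the nearest known samples."""
    known = dict(zip(times, values))
    full_times = list(range(times[0], times[-1] + step, step))
    filled = []
    for t in full_times:
        if t in known:
            filled.append(known[t])
        else:
            lo = max(x for x in times if x < t)   # nearest sample before t
            hi = min(x for x in times if x > t)   # nearest sample after t
            frac = (t - lo) / (hi - lo)
            filled.append(known[lo] + frac * (known[hi] - known[lo]))
    return full_times, filled
```

Only after such gap filling would the XGBoost and KNN-regressor models be trained on the regularized series.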

Simple Kinematic Model Generation by Learning Control Inputs and Velocity Outputs of a Ship (선박의 제어 입력과 속도 출력 학습에 의한 단순 운동학 모델 생성)

  • Kim, Dong Jin;Yun, Kunhang
    • Journal of Navigation and Port Research
    • /
    • v.45 no.6
    • /
    • pp.284-297
    • /
    • 2021
  • A simple kinematic model for the prediction of ship manoeuvres based on trial data is proposed in this study. The model consists of first-order differential equations in the surge, sway, and yaw directions that simulate the time series of each velocity component. In place of actual sea-trial data, dynamic-model simulations are conducted with randomly varied control inputs such as propeller revolution rates and rudder angles. By learning from the control inputs and velocity outputs of sufficiently long dynamic-model simulations, the kinematic model coefficients are optimized so that the kinematic model can approximately reproduce the velocity outputs of the dynamic-model simulations under arbitrary control inputs. The resulting kinematic model is verified with new dynamic simulation sets.
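The coefficient optimization can be illustrated for one axis. If the first-order model has the common form dv/dt = (K·u - v)/T, its discretized version is linear in two parameters, so least squares recovers K and T from logged inputs and velocities. This model structure is a plausible reading of the abstract, not the paper's exact equations:

```python
def fit_first_order(u, v, dt):
    """Fit dv/dt = (K*u - v)/T by least squares on the discretized
    form v[k+1] - v[k] = a*u[k] + b*v[k], where a = K*dt/T and
    b = -dt/T. One such fit would be done per surge/sway/yaw axis."""
    n = len(v) - 1
    dv = [v[k + 1] - v[k] for k in range(n)]
    Suu = sum(x * x for x in u[:n])
    Svv = sum(x * x for x in v[:n])
    Suv = sum(x * y for x, y in zip(u[:n], v[:n]))
    Sud = sum(x * y for x, y in zip(u[:n], dv))
    Svd = sum(x * y for x, y in zip(v[:n], dv))
    det = Suu * Svv - Suv * Suv          # normal-equation determinant
    a = (Sud * Svv - Suv * Svd) / det
    b = (Suu * Svd - Suv * Sud) / det
    return -a / b, -dt / b               # (K, T)
```

Given data generated exactly by such a model, the fit recovers the generating gain and time constant.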

An Improved Domain-Knowledge-based Reinforcement Learning Algorithm

  • Jang, Si-Young;Suh, Il-Hong
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1309-1314
    • /
    • 2003
  • If an agent has the ability to learn from previous knowledge, it can be expected to speed up its learning while interacting with the environment. In this paper, we present an improved reinforcement learning algorithm that uses domain knowledge represented by problem-independent features and their classifiers. Here, neural networks are employed as the knowledge classifiers. To show the validity of the proposed algorithm, computer simulations are presented for the navigation problems of a mobile robot and a micro aerial vehicle (MAV).

Reinforcement Learning Using a State Partition Method under Real Environment

  • Saito, Ken;Masuda, Shiro;Yamaguchi, Toru
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.66-69
    • /
    • 2003
  • This paper considers reinforcement learning (RL) in real environments. Most reinforcement learning studies have been conducted in simulation because real-environment learning requires large computational cost and much time; furthermore, it is more difficult to acquire many rewards efficiently in real environments than in virtual ones. The most important requirement for successful real-environment learning is the appropriate construction of the state space. In this paper, I first give a basic overview of reinforcement learning in real environments. Next, I introduce a state-space construction method for real environments, the State Partition Method. Finally, I apply this method to a robot navigation problem and compare it with conventional methods.
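As a generic sketch of the state-space construction the abstract alludes to, continuous sensor readings can be mapped to a discrete state index via a partition of each reading's range. The fixed partition below is a toy stand-in; the paper's State Partition Method additionally adapts the partition during learning, which this version omits:

```python
def partition_state(readings, boundaries):
    """Map continuous sensor readings to one discrete state index.
    boundaries[i] lists the cut points for reading i."""
    state = 0
    for r, cuts in zip(readings, boundaries):
        bin_idx = sum(1 for b in cuts if r >= b)   # which interval r falls in
        state = state * (len(cuts) + 1) + bin_idx  # mixed-radix encoding
    return state
```

A coarser partition gives faster learning but blurs distinct situations; choosing it well is exactly the difficulty the paper addresses.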

Underwater Acoustic Research Trends with Machine Learning: Passive SONAR Applications

  • Yang, Haesang;Lee, Keunhwa;Choo, Youngmin;Kim, Kookhyun
    • Journal of Ocean Engineering and Technology
    • /
    • v.34 no.3
    • /
    • pp.227-236
    • /
    • 2020
  • Underwater acoustics, which is the domain that addresses phenomena related to the generation, propagation, and reception of sound waves in water, has been applied mainly in the research on the use of sound navigation and ranging (SONAR) systems for underwater communication, target detection, investigation of marine resources and environment mapping, and measurement and analysis of sound sources in water. The main objective of remote sensing based on underwater acoustics is to indirectly acquire information on underwater targets of interest using acoustic data. Meanwhile, highly advanced data-driven machine-learning techniques are being used in various ways in the processes of acquiring information from acoustic data. The related theoretical background is introduced in the first part of this paper (Yang et al., 2020). This paper reviews machine-learning applications in passive SONAR signal-processing tasks including target detection/identification and localization.

DYNAMIC ROUTE PLANNING BY Q-LEARNING -Cellular Automation Based Simulator and Control

  • Sano, Masaki;Jung, Si
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.24.2-24
    • /
    • 2001
  • In this paper, the authors present a new dynamic route planning method based on Q-learning. The proposed algorithm is executed in a newly created cellular-automaton-based traffic simulator. In the Vehicle Information and Communication System (VICS), an active field of Intelligent Transport Systems (ITS), traffic-congestion information is sent to each vehicle in real time. However, a centralized navigation system is not a realistic way to guide millions of vehicles in a megalopolis. Autonomous distributed systems should be more flexible and scalable, and also have a chance to address each vehicle's demand. In such systems, each vehicle can search for its own optimal route. We employ Q-learning, a reinforcement learning method, to search for an optimal or sub-optimal route along which drivers can avoid traffic congestion. There are some applications of reinforcement learning in "static" environments, but there are ...
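As a generic illustration of the cellular-automaton simulator class the paper builds on (not its exact rules), a Nagel-Schreckenberg-style single-lane traffic update looks like this:

```python
import random

def ca_traffic_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg-style traffic model
    on a circular road: accelerate, brake to the gap ahead, random
    slowdown, then move."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (pos[ahead] - pos[i] - 1) % road_len   # empty cells ahead
        v = min(vel[i] + 1, v_max, gap)              # accelerate, then brake
        if v > 0 and rng.random() < p_slow:          # random slowdown
            v -= 1
        vel[i] = v
    for i in range(len(pos)):
        pos[i] = (pos[i] + vel[i]) % road_len
    return pos, vel
```

Because each car brakes to the gap measured before anyone moves, the parallel update is collision-free, which makes such models cheap enough to host a learning agent per vehicle.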

Deep Learning Model for Electric Power Demand Prediction Using Special Day Separation and Prediction Elements Extention (특수일 분리와 예측요소 확장을 이용한 전력수요 예측 딥 러닝 모델)

  • Park, Jun-Ho;Shin, Dong-Ha;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.4
    • /
    • pp.365-370
    • /
    • 2017
  • This study analyzes the correlation between weekday data and special-day data, which have different power-demand patterns, builds separate data sets, and suggests ways to reduce power-demand prediction error by using a deep learning network suited to each data set. In addition, we propose a method to improve the prediction rate by adding environmental and separating elements to the meteorological elements that are the basic predictors of power demand. For the entire data set, power demand was predicted with an LSTM, which is suitable for learning time-series data; for the special-day data, power demand was predicted with a DNN. The experimental results show that the prediction rate improves when prediction elements other than meteorological ones are added. On the entire data set, the average RMSE was 0.2597 for the LSTM and 0.5474 for the DNN, indicating that the LSTM achieved the better prediction rate. On the special-day data set, the average RMSE was 0.2201 for the DNN, indicating that the DNN predicted better than the LSTM. The MAPE of the LSTM on the whole data set was 2.74%, and on special days it was 3.07%.
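The two error metrics quoted in the abstract can be computed as follows; this is a straightforward sketch of the standard definitions, not the authors' evaluation code:

```python
def rmse(actual, pred):
    """Root-mean-square error (scale-dependent)."""
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)) ** 0.5

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

RMSE is scale-dependent, so the per-data-set values above are only comparable within one data set, while MAPE allows a rough comparison across them.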