• Title/Summary/Keyword: Learning Navigation

Search Results: 358

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.47-49, 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) in computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in certain areas such as localization, navigation, etc. In this paper, we propose a case study of robot navigation based on deep reinforcement learning technology. We look into the benefits of switching from traditional ROS-based navigation algorithms toward machine learning approaches and methods. We describe the state-of-the-art technology by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and DRL before focusing on visual navigation based on DRL. The case study is a prelude to real-life deployment in which a mobile navigation agent learns to navigate unknown areas.

Mapless Navigation with Distributional Reinforcement Learning (분포형 강화학습을 활용한 맵리스 네비게이션)

  • Van Manh Tran;Gon-Woo Kim
    • The Journal of Korea Robotics Society, v.19 no.1, pp.92-97, 2024
  • This paper provides a study of the distributional perspective on reinforcement learning for application in mobile robot navigation. Mapless navigation algorithms based on deep reinforcement learning have shown promising performance and high applicability. Trial-and-error simulations in virtual environments are encouraged for implementing autonomous navigation because real-life interactions are expensive. Nevertheless, applying a deep reinforcement learning model to real tasks is challenging due to the dissimilarity of data collected in virtual simulation and the physical world, leading to high-risk behaviors and a high collision rate. In this paper, we present a distributional reinforcement learning architecture for mapless navigation of a mobile robot that adapts to the uncertainty of environmental change. The experimental results indicate the superior performance of the distributional soft actor-critic compared to conventional methods.
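The distributional idea described in this abstract can be illustrated with a minimal sketch: instead of a single expected return, the agent tracks a set of quantile estimates of the return distribution, and the Bellman target is applied element-wise. The quantile count, constants, and values below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

GAMMA = 0.99  # discount factor (illustrative)

def distributional_target(reward, next_quantiles, done):
    """Bellman target applied element-wise to each quantile estimate."""
    next_quantiles = np.asarray(next_quantiles, dtype=float)
    return reward + (0.0 if done else GAMMA) * next_quantiles

# Quantile estimates of the return from the next state (illustrative values).
next_q = [1.0, 2.0, 3.0, 4.0, 5.0]
target = distributional_target(reward=0.5, next_quantiles=next_q, done=False)

# The mean of the quantiles recovers the conventional scalar value, while
# the spread captures the uncertainty the abstract refers to.
expected_return = target.mean()
spread = target.max() - target.min()
```

Keeping the whole distribution rather than its mean is what lets an agent distinguish a safe route from a risky one with the same average return.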

A Study on Deep Reinforcement Learning Framework for DME Pulse Design

  • Lee, Jungyeon;Kim, Euiho
    • Journal of Positioning, Navigation, and Timing, v.10 no.2, pp.113-120, 2021
  • The Distance Measuring Equipment (DME) is a ground-based aircraft navigation system and is considered an infrastructure that ensures resilient aircraft navigation capability during a Global Navigation Satellite System (GNSS) outage. The main problem of DME as a GNSS backup is poor positioning accuracy, which often exceeds 100 m. In this paper, a novel approach of applying deep reinforcement learning to DME pulse design is introduced to improve the DME distance measuring accuracy. This method is designed to develop multipath-resistant DME pulses that comply with current DME specifications. In the research, a Markov Decision Process (MDP) for DME pulse design is set up using pulse shape requirements and a timing error. Based on the designed MDP, we created an environment called PulseEnv, which allows the agent representing a DME pulse shape to explore continuous space using the Soft Actor-Critic (SAC) reinforcement learning algorithm.
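The MDP-plus-environment setup described here can be sketched as a gym-style class: the state is a vector of pulse-shape parameters, a continuous action nudges those parameters, and the reward penalizes deviation from the specification. Everything below — the class internals, reward form, and constants — is an illustrative assumption, not the paper's actual PulseEnv.

```python
import numpy as np

class PulseEnv:
    """Hypothetical sketch of an environment for continuous pulse-shape search."""

    def __init__(self, n_params=4, target=None):
        self.n_params = n_params
        # Reference parameters meeting the spec (illustrative placeholder).
        self.target = np.zeros(n_params) if target is None else np.asarray(target)
        self.state = None

    def reset(self):
        self.state = np.random.uniform(-1.0, 1.0, self.n_params)
        return self.state.copy()

    def step(self, action):
        # A continuous action nudges the pulse-shape parameters.
        self.state = np.clip(self.state + np.asarray(action), -2.0, 2.0)
        spec_penalty = float(np.linalg.norm(self.state - self.target))
        reward = -spec_penalty          # closer to the spec => higher reward
        done = spec_penalty < 0.05
        return self.state.copy(), reward, done, {}

env = PulseEnv()
obs = env.reset()
obs, reward, done, info = env.step(np.zeros(env.n_params))
```

A continuous-action algorithm such as SAC can then be trained against this interface; in the paper's setting, the timing error would enter the reward alongside the shape requirements.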

Region-based Q-learning For Autonomous Mobile Robot Navigation (자율 이동 로봇의 주행을 위한 영역 기반 Q-learning)

  • 차종환;공성학;서일홍
    • Proceedings of the Institute of Control, Robotics and Systems Conference, 2000.10a, pp.174-174, 2000
  • Q-learning, based on discrete state and action spaces, is the most widely used reinforcement learning method. However, it requires a lot of memory and much time to learn all actions of each state when applied to real mobile robot navigation with continuous state and action spaces. Region-based Q-learning is a reinforcement learning method that estimates the action values of a real state by using a triangular action distribution model and the relationship with neighboring states defined and learned beforehand. This paper proposes a new region-based Q-learning which uses a reward assigned only when the agent reaches the target, and escapes locally optimal paths by adjusting the random action rate. Applied to mobile robot navigation, this uses less memory, lets the robot move smoothly, and learns the optimal solution quickly. To show the validity of our method, computer simulations are illustrated.
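Two of the ideas in this abstract — a reward given only at the target, and a random action rate that is reduced over time to escape locally optimal paths — can be shown in a minimal tabular Q-learning sketch. The 1-D corridor world and all constants are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)                     # reproducible illustration
N_STATES, GOAL = 6, 5              # corridor of 6 cells, goal at the right end
ACTIONS = [-1, +1]                 # move left / move right
ALPHA, GAMMA = 0.5, 0.9

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def run(episodes=200, eps=0.5, eps_decay=0.99):
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Random action rate eps balances exploration vs. exploitation.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0   # sparse, goal-only reward
            Q[(s, a)] += ALPHA * (
                r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)]
            )
            s = s2
        eps *= eps_decay            # decay the random action rate over episodes

run()
# After training, the greedy policy should move right in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

The region-based variant the paper proposes additionally interpolates action values over continuous states; the discrete table above only illustrates the reward and exploration schedule.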

A Design of The Tailored Learning Navigation based on The Learning Pattern of Learner (학습자의 학습 패턴을 통한 맞춤형 학습 내비게이션 설계)

  • Jeong, Hwa-Young
    • Journal of Internet Computing and Services, v.9 no.6, pp.109-115, 2008
  • Many methods to improve the learning effect of learners in e-learning have been researched and applied. In most e-learning systems, a learning navigation presenting the learning course and progress to learners is applied. But most learning courses and progressions are designed by the teacher beforehand, and learners study a course and progression that is already fixed. In this research, a learning navigation which can change the learning course and progress dynamically according to the learner's learning effect is presented. For this purpose, the factors which define the learning course and progress by learning chapters, contents, and item difficulties were classified, and each process logic was analyzed through CSP.

Collective Navigation Through a Narrow Gap for a Swarm of UAVs Using Curriculum-Based Deep Reinforcement Learning (커리큘럼 기반 심층 강화학습을 이용한 좁은 틈을 통과하는 무인기 군집 내비게이션)

  • Myong-Yol Choi;Woojae Shin;Minwoo Kim;Hwi-Sung Park;Youngbin You;Min Lee;Hyondong Oh
    • The Journal of Korea Robotics Society, v.19 no.1, pp.117-129, 2024
  • This paper introduces collective navigation through a narrow gap using a curriculum-based deep reinforcement learning algorithm for a swarm of unmanned aerial vehicles (UAVs). Collective navigation in complex environments is essential for various applications such as search and rescue, environment monitoring, and military operations. Conventional methods, which are easily interpretable from an engineering perspective, divide the navigation task into mapping, planning, and control; however, they struggle with increased latency and unmodeled environmental factors. Recently, learning-based methods have addressed these problems by employing an end-to-end framework with neural networks. Nonetheless, most existing learning-based approaches face challenges in complex scenarios, particularly when navigating through a narrow gap or when a leader or informed UAV is unavailable. Our approach uses the information of a certain number of nearest neighboring UAVs and incorporates a task-specific curriculum to reduce learning time and train a robust model. The effectiveness of the proposed algorithm is verified through an ablation study and quantitative metrics. Simulation results demonstrate that our approach outperforms existing methods.
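The task-specific curriculum mentioned in this abstract can be sketched as a stage scheduler: training starts with an easy setting (a wide gap) and tightens it whenever the recent success rate clears a threshold. The widths, threshold, and window size below are illustrative assumptions, not values from the paper.

```python
from collections import deque

class GapCurriculum:
    """Hypothetical curriculum that narrows the gap as the policy improves."""

    def __init__(self, widths=(4.0, 3.0, 2.0, 1.0), threshold=0.8, window=20):
        self.widths = list(widths)      # gap widths from easy to hard (meters)
        self.stage = 0
        self.threshold = threshold
        self.results = deque(maxlen=window)

    @property
    def gap_width(self):
        return self.widths[self.stage]

    def report(self, success: bool):
        """Record an episode outcome; advance the stage when warranted."""
        self.results.append(success)
        full = len(self.results) == self.results.maxlen
        rate = sum(self.results) / max(len(self.results), 1)
        if full and rate >= self.threshold and self.stage < len(self.widths) - 1:
            self.stage += 1
            self.results.clear()        # re-evaluate on the harder task

cur = GapCurriculum()
for _ in range(20):
    cur.report(True)                    # 20 consecutive successes on the easy task
```

After a full window of successes, the scheduler moves to the next, narrower gap; the policy is then trained further at that stage before the gap shrinks again.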

VR-Based Navigation Simulator Using VRML

  • Yim, Jeong-Bin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2001.10a, pp.121-140, 2001
  • We explored the application of VR technologies to implement a VR-based navigation simulator using VRML. The VR ship simulator provided useful functions, such as maneuvering a ship while easily learning the International Regulations for Preventing Collisions at Sea. The AtoN simulator can give attractive and interesting experiences, easing the learning of the rules of the IALA system and the comprehension of Aids to Navigation characteristics in various weather conditions. From the test results, it became apparent that the developed VR-based navigation simulator could be adequate as a next-generation ship simulator.

A biologically inspired model based on a multi-scale spatial representation for goal-directed navigation

  • Li, Weilong;Wu, Dewei;Du, Jia;Zhou, Yang
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.3, pp.1477-1491, 2017
  • Inspired by the multi-scale nature of hippocampal place cells, a biologically inspired model based on a multi-scale spatial representation for goal-directed navigation is proposed in order to achieve robotic spatial cognition and autonomous navigation. First, a map of the place cells is constructed at different scales, which is used for encoding the spatial environment. Then, the firing rate of the place cells in each layer is calculated by the Gaussian function as the input of the Q-learning process. The robot decides on its next direction of movement from several candidate actions according to the rules of action selection. After several training trials, the robot can accumulate experiential knowledge and thus learn an appropriate navigation policy to find its goal. The simulation results show that, in contrast to the other two methods (G-Q, S-Q), the multi-scale model presented in this paper is not only in line with the multi-scale nature of place cells, but also has a faster learning potential to find the optimized path to the goal. Additionally, this method also has a good ability to complete the goal-directed navigation task in large spaces and in environments with obstacles.
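The Gaussian place-cell encoding described above can be sketched in a few lines: each layer covers the space with fields of a different width, and a cell's firing rate is a Gaussian of the distance between the agent and the cell's center. The 1-D track, the centers, and the widths are illustrative assumptions.

```python
import numpy as np

def firing_rates(position, centers, sigma):
    """Gaussian firing rate of each place cell at the given position."""
    d = np.asarray(centers) - position
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

centers = np.linspace(0.0, 10.0, 11)             # cell centers along a 1-D track
coarse = firing_rates(3.0, centers, sigma=2.0)   # large-scale layer
fine = firing_rates(3.0, centers, sigma=0.5)     # small-scale layer

# The concatenated multi-scale rates form the state input to Q-learning:
# the coarse layer gives broad localization, the fine layer precision.
state_vector = np.concatenate([coarse, fine])
```

The cell centered at the agent's position fires maximally in both layers, but the fine layer's activity falls off much faster with distance, which is what gives the multi-scale code both coverage and precision.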

Scorm-based Sequencing & Navigation Model for Collaborative Learning (Scorm 기반 협력학습을 위한 시퀀싱 & 네비게이션 모델)

  • Doo, Chang-Ho;Lee, Jun-Seok
    • Journal of Digital Convergence, v.10 no.6, pp.189-196, 2012
  • In this paper, we propose a SCORM-based sequencing & navigation model for collaborative learning. It is an e-learning process control model that efficiently and graphically defines SCORM's content aggregation model and its sequencing prerequisites through a formal approach. The process-based model is defined using the expanded ICN (Information Control Net) model, called SCOSNCN (SCO Sequencing & Navigation Control Net). We strongly believe that the process-driven model delivers much more convenient content aggregation work and systems, in terms of not only defining the intended sequence and ordering of learning activities, but also building the runtime environment for the sequencing and navigation of learning activities and experiences.

Reward Shaping for a Reinforcement Learning Method-Based Navigation Framework

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.9-11, 2022
  • Applying reinforcement learning in everyday applications and varied environments has proved the potential of the field and revealed pitfalls along the way. In robotics, a learning agent gradually takes over control of a robot by abstracting the robot's navigation model with its inputs and outputs, thus reducing human intervention. The challenge for the agent is how to implement a feedback function that facilitates the learning of an MDP problem in an environment while reducing the method's convergence time. In this paper, we implement a reward shaping system in a ROS environment that avoids sparse rewards, which provide less data for the learning agent. Reward shaping prioritizes behaviors that bring the robot closer to the goal by giving intermediate rewards, and helps the algorithm converge quickly. We use a pseudocode implementation as an illustration of the method.
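The shaping idea in this abstract — intermediate rewards for moving closer to the goal — is commonly realized as potential-based shaping, F = γ·φ(s') − φ(s), with the negative distance to the goal as the potential. The sketch below uses that standard formulation; the goal position and constants are illustrative assumptions, not the paper's implementation.

```python
import math

GAMMA = 0.99
GOAL = (4.0, 4.0)    # goal position in the plane (illustrative)

def phi(pos):
    """Potential function: negative Euclidean distance to the goal."""
    return -math.dist(pos, GOAL)

def shaped_reward(pos, next_pos, sparse_reward):
    """Sparse reward plus the potential-based shaping term."""
    return sparse_reward + GAMMA * phi(next_pos) - phi(pos)

# A step toward the goal yields a positive shaping bonus even when the
# sparse reward is zero, giving the learner a denser training signal.
bonus = shaped_reward((0.0, 0.0), (1.0, 1.0), sparse_reward=0.0)
```

Potential-based shaping is attractive because it densifies the reward signal without changing the optimal policy, which is exactly the convergence-speed benefit the abstract describes.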
