• Title/Abstract/Keywords: Learning Navigation


Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022년도 춘계학술대회 / pp.47-49 / 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) in computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in certain areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning technology. We look into the benefits of switching from traditional ROS-based navigation algorithms toward machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and Deep Reinforcement Learning (DRL) before focusing on visual navigation based on DRL. The case study is a prelude to real-life deployment in which a mobile navigation agent learns to navigate unfamiliar areas.


분포형 강화학습을 활용한 맵리스 네비게이션 (Mapless Navigation with Distributional Reinforcement Learning)

  • 짠 반 마잉;김곤우
    • 로봇학회논문지 / Vol. 19 No. 1 / pp.92-97 / 2024
  • This paper provides a study of the distributional perspective on reinforcement learning for application in mobile robot navigation. Mapless navigation algorithms based on deep reinforcement learning have shown promising performance and high applicability. Trial-and-error simulations in virtual environments are encouraged for implementing autonomous navigation because real-life interactions are expensive. Nevertheless, applying a deep reinforcement learning model to real tasks is challenging due to the mismatch in data collection between virtual simulation and the physical world, leading to risky behaviors and high collision rates. In this paper, we present a distributional reinforcement learning architecture for mapless navigation of a mobile robot that adapts to the uncertainty of environmental change. The experimental results indicate the superior performance of the distributional soft actor-critic compared to conventional methods.
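A distributional critic of the kind described above represents the return as a set of quantiles rather than a single scalar Q-value, trained with a quantile-regression Huber loss. A minimal sketch of that loss (a standard construction; the paper's exact critic may differ):

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile-regression Huber loss used by distributional critics.

    pred_quantiles: (N,) predicted return quantiles for one state-action.
    target_samples: (M,) samples of the Bellman target distribution.
    """
    N = len(pred_quantiles)
    taus = (np.arange(N) + 0.5) / N          # quantile midpoints
    # Pairwise TD errors between every target sample and every quantile.
    u = target_samples[None, :] - pred_quantiles[:, None]   # (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u**2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric weighting pushes each quantile toward its tau-level.
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return (weight * huber).mean()
```

Minimizing this loss over transitions drives each predicted quantile toward the corresponding quantile of the return distribution, which is what lets the policy account for uncertainty instead of only the mean return.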

A Study on Deep Reinforcement Learning Framework for DME Pulse Design

  • Lee, Jungyeon;Kim, Euiho
    • Journal of Positioning, Navigation, and Timing / Vol. 10 No. 2 / pp.113-120 / 2021
  • The Distance Measuring Equipment (DME) is a ground-based aircraft navigation system and is considered an infrastructure that ensures resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. The main problem of DME as a GNSS backup is poor positioning accuracy, which often exceeds 100 m. In this paper, a novel approach of applying deep reinforcement learning to DME pulse design is introduced to improve the DME distance measuring accuracy. The method is designed to develop multipath-resistant DME pulses that comply with current DME specifications. In the research, a Markov Decision Process (MDP) for DME pulse design is set up using pulse shape requirements and a timing error. Based on the designed MDP, we created an environment called PulseEnv, which allows the agent representing a DME pulse shape to explore a continuous action space using the Soft Actor-Critic (SAC) reinforcement learning algorithm.
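The MDP setup above (pulse shape as state, continuous perturbations as actions, spec compliance and timing error as reward terms) could be sketched as a toy environment. All names and penalty terms below are illustrative assumptions, not the paper's actual PulseEnv:

```python
import numpy as np

class PulseEnv:
    """Toy sketch of an MDP for DME pulse design (names and penalty
    terms are illustrative assumptions, not the paper's PulseEnv)."""

    def __init__(self, n_samples=64):
        self.n = n_samples
        self.t = np.linspace(0.0, 1.0, n_samples)
        self.shape = None

    def reset(self):
        # Start from a Gaussian-like pulse centered in the window.
        self.shape = np.exp(-0.5 * ((self.t - 0.5) / 0.1) ** 2)
        return self.shape.copy()

    def step(self, action):
        # Continuous action: a small perturbation of the pulse samples.
        self.shape = np.clip(self.shape + 0.01 * action, 0.0, 1.0)
        spec_penalty = max(0.0, self.shape.max() - 1.0)       # amplitude spec
        timing_error = abs(self.t[self.shape.argmax()] - 0.5)  # peak drift
        reward = -(timing_error + spec_penalty)
        return self.shape.copy(), reward, False, {}
```

A continuous-action learner such as SAC would then train on `reset`/`step` rollouts, nudging the pulse samples to minimize the combined penalty.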

자율 이동 로봇의 주행을 위한 영역 기반 Q-learning (Region-based Q-learning for Autonomous Mobile Robot Navigation)

  • 차종환;공성학;서일홍
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.174-174 / 2000
  • Q-learning, based on discrete state and action spaces, is the most widely used reinforcement learning method. However, it requires a great deal of memory and learning time to cover all actions of each state when applied to real mobile robot navigation with continuous state and action spaces. Region-based Q-learning is a reinforcement learning method that estimates the action values of a real state by using a triangular action distribution model and the relationship with neighboring states that were defined and learned beforehand. This paper proposes a new region-based Q-learning that assigns a reward only when the agent reaches the target and escapes locally optimal paths by adjusting the random action rate. Applied to mobile robot navigation, it uses less memory, lets the robot move smoothly, and learns an optimal solution quickly. To show the validity of our method, computer simulations are illustrated.
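The two ingredients named above, a reward given only at the target and an adaptive random action rate, can be shown in a tabular sketch. This is a simplification of the paper's region-based scheme, and `step_fn` is an assumed environment hook:

```python
import random

def region_q_learning(n_states, n_actions, step_fn, goal_state,
                      episodes=200, alpha=0.1, gamma=0.95):
    """Tabular sketch: goal-only reward plus an adaptive random-action
    rate (a simplification of the paper's region-based scheme;
    `step_fn(s, a) -> next_state` is an assumed environment hook)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    epsilon = 0.5                         # random action rate
    for _ in range(episodes):
        s, steps = 0, 0
        while s != goal_state and steps < 100:
            a = (random.randrange(n_actions) if random.random() < epsilon
                 else max(range(n_actions), key=lambda i: Q[s][i]))
            s2 = step_fn(s, a)
            r = 1.0 if s2 == goal_state else 0.0   # reward only at goal
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, steps = s2, steps + 1
        # Lower exploration after success, raise it when stuck.
        epsilon = max(0.05, epsilon * 0.95) if s == goal_state \
                  else min(0.9, epsilon * 1.05)
    return Q
```

Raising the random action rate after failed episodes is what lets the agent wander off a locally optimal path, while decay after successes locks in the learned route.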


학습자의 학습 패턴을 통한 맞춤형 학습 내비게이션 설계 (A Design of The Tailored Learning Navigation based on The Learning Pattern of Learner)

  • 정화영
    • 인터넷정보학회논문지 / Vol. 9 No. 6 / pp.109-115 / 2008
  • In e-learning, many methods for improving learners' learning effectiveness have been studied and applied. Most e-learning systems employ learning navigation that presents a course of study to the learner. In general, however, the learning sequence and course are designed in advance by the instructor, and the learner follows the predetermined path. This study presents a learning navigation scheme in which the learning sequence and course change dynamically according to the learner's results. To this end, the factors that determine the learning course were divided into learning unit, content, and difficulty, and each process logic was analyzed using CSP.
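The unit/content/difficulty adaptation described above can be illustrated with a small decision rule. This is a hypothetical rule for illustration only, not the paper's CSP-analyzed process logic, and the pass mark and difficulty range are assumptions:

```python
def next_step(unit, difficulty, score, pass_mark=70):
    """Illustrative rule (not the paper's CSP model) for adapting the
    learning path from a learner's assessment result.

    Returns the (unit, difficulty) the learner should study next.
    """
    if score >= pass_mark:
        # Mastered the material: advance to the next unit at base difficulty.
        return unit + 1, 1
    if difficulty > 1:
        # Struggling: retry the same unit with easier content.
        return unit, difficulty - 1
    # Already at the easiest level: repeat the unit as-is.
    return unit, difficulty
```

The point of such a rule is that the navigation path is computed from the learner's result at each step instead of being fixed in advance by the instructor.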


커리큘럼 기반 심층 강화학습을 이용한 좁은 틈을 통과하는 무인기 군집 내비게이션 (Collective Navigation Through a Narrow Gap for a Swarm of UAVs Using Curriculum-Based Deep Reinforcement Learning)

  • 최명열;신우재;김민우;박휘성;유영빈;이민;오현동
    • 로봇학회논문지 / Vol. 19 No. 1 / pp.117-129 / 2024
  • This paper introduces collective navigation through a narrow gap using a curriculum-based deep reinforcement learning algorithm for a swarm of unmanned aerial vehicles (UAVs). Collective navigation in complex environments is essential for applications such as search and rescue, environment monitoring, and military operations. Conventional methods, which are easily interpretable from an engineering perspective, divide navigation into mapping, planning, and control; however, they struggle with increased latency and unmodeled environmental factors. Recently, learning-based methods have addressed these problems by employing an end-to-end framework with neural networks. Nonetheless, most existing learning-based approaches face challenges in complex scenarios, particularly when navigating through a narrow gap or when a leader or informed UAV is unavailable. Our approach uses the information of a fixed number of nearest neighboring UAVs and incorporates a task-specific curriculum to reduce learning time and train a robust model. The effectiveness of the proposed algorithm is verified through an ablation study and quantitative metrics. Simulation results demonstrate that our approach outperforms existing methods.
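A task-specific curriculum of this kind typically starts with an easy version of the task and tightens it as the policy improves. The schedule below is a generic sketch under assumed values (gap widths, stage count, promotion threshold), not the paper's actual curriculum:

```python
def curriculum_gap_width(stage, n_stages=5, start=4.0, end=1.0):
    """Illustrative curriculum: the gap the UAV swarm must pass through
    narrows linearly across training stages (all values assumed)."""
    frac = min(stage, n_stages - 1) / (n_stages - 1)
    return start + (end - start) * frac

def maybe_promote(stage, success_rate, threshold=0.8, n_stages=5):
    """Advance to a harder stage once the swarm's recent success rate
    clears a threshold (the threshold value is an assumption)."""
    return min(stage + 1, n_stages - 1) if success_rate >= threshold else stage
```

Training on wide gaps first gives the policy dense success signals early on, so that by the final stage the agents already have a usable gap-traversal behavior to refine.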

VR-Based Navigation Simulator Using VRML

  • Yim, Jeong-Bin
    • 한국항해항만학회:학술대회논문집 / 한국항해항만학회 2001년도 Proceeding of KIN-CIN Joint Symposium 2001 on Satellite Navigation/AIS, Intelligence, Computer Based Marine Simulation System and VDR / pp.121-140 / 2001
  • We explored the application of VR technologies to implement a VR-based navigation simulator using VRML. The VR ship simulator provides useful functions such as maneuvering a ship while learning the International Regulations for Preventing Collisions at Sea. The AtoN simulator offers attractive and engaging experiences that ease learning the rules of the IALA system and comprehension of Aids to Navigation characteristics in various weather conditions. From the test results, it became apparent that the developed VR-based navigation simulator could be adequate as a next-generation ship simulator.


A biologically inspired model based on a multi-scale spatial representation for goal-directed navigation

  • Li, Weilong;Wu, Dewei;Du, Jia;Zhou, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11 No. 3 / pp.1477-1491 / 2017
  • Inspired by the multi-scale nature of hippocampal place cells, a biologically inspired model based on a multi-scale spatial representation for goal-directed navigation is proposed to achieve robotic spatial cognition and autonomous navigation. First, a map of place cells is constructed at different scales and used to encode the spatial environment. Then, the firing rate of the place cells in each layer is calculated with a Gaussian function and used as the input of the Q-learning process. The robot decides its next direction of movement from several candidate actions according to the action-selection rules. After several training trials, the robot accumulates experiential knowledge and thus learns an appropriate navigation policy to find its goal. The simulation results show that, in contrast to the other two methods (G-Q, S-Q), the multi-scale model presented in this paper is not only in line with the multi-scale nature of place cells but also learns the optimized path to the goal faster. Additionally, the method completes goal-directed navigation tasks well in large spaces and in environments with obstacles.
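The Gaussian firing-rate encoding described above can be sketched directly. The layer sizes and field widths below are assumptions for illustration; only the Gaussian form follows from the abstract:

```python
import numpy as np

def firing_rates(pos, centers, sigma):
    """Gaussian firing rates of one layer of place cells at 2-D `pos`.

    centers: (N, 2) place-field centers at one scale; sigma: field width.
    """
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def multi_scale_rates(pos, layers):
    """Concatenate rates across layers of increasing sigma, mirroring the
    paper's multi-scale map (layer contents here are assumptions)."""
    return np.concatenate([firing_rates(pos, c, s) for c, s in layers])
```

The concatenated rate vector then serves as the state input to Q-learning: coarse layers give broad coverage of large spaces, while fine layers sharpen the position estimate near the goal.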

Scorm 기반 협력학습을 위한 시퀀싱 & 네비게이션 모델 (Scorm-based Sequencing & Navigation Model for Collaborative Learning)

  • 두창호;이준석
    • 디지털융복합연구 / Vol. 10 No. 6 / pp.189-196 / 2012
  • This paper proposes a SCORM-based sequencing & navigation model for multi-party collaborative learning. The model rests on a formal approach: to define collaborative learning efficiently and graphically, the SCORM content aggregation model and the sequencing & navigation model are defined on top of the ICN (Information Control Net) model. ICN is a process-based model that expresses the control flow among elements; this paper extends it into the SCOSNCN (SCO Sequencing & Navigation Control Net) model, which defines the execution order of processes and learning activities, the content required for collaborative learning, and the associated sequencing & navigation concerns. To support collaborative learning, SCOSNCN assigns instructors and learners to each activity and specifies each activity's precondition, postcondition, and navigation conditions. The paper also proposes the basic sequencing & navigation elements, roles, and rules needed to define collaborative learning. Based on this model, a SCORM-based collaborative learning system architecture and a working example are presented, aiming to improve the effectiveness of education through well-defined learning content and collaborative learning, for instructors and learners as well as for the e-learning industry.

Reward Shaping for a Reinforcement Learning Method-Based Navigation Framework

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022년도 추계학술대회 / pp.9-11 / 2022
  • Applying reinforcement learning in everyday applications and varied environments has proved the potential of the field and revealed pitfalls along the way. In robotics, a learning agent gradually takes over control of a robot by abstracting the robot's navigation model with its inputs and outputs, thus reducing human intervention. The challenge for the agent is how to implement a feedback function that facilitates learning an MDP problem in an environment while reducing the method's convergence time. In this paper we implement a reward shaping system in a ROS environment that avoids sparse rewards, which give the learning agent less data. Reward shaping prioritizes behaviors that bring the robot closer to the goal by giving intermediate rewards, helping the algorithm converge quickly. We use a pseudocode implementation as an illustration of the method.
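The intermediate-reward idea above is commonly realized as potential-based reward shaping, which densifies a sparse goal reward without changing the optimal policy. A minimal sketch using negative distance-to-goal as the potential (a standard construction; the paper's exact shaping terms may differ):

```python
import math

def shaped_reward(prev_pos, pos, goal, base_reward, gamma=0.99):
    """Potential-based shaping: base_reward + gamma*phi(s') - phi(s),
    with phi(s) = -distance(s, goal). Moving toward the goal earns a
    positive intermediate reward even when base_reward is sparse."""
    def phi(p):
        return -math.dist(p, goal)
    return base_reward + gamma * phi(pos) - phi(prev_pos)
```

Because the shaping term is a difference of potentials, it telescopes over a trajectory, so the agent converges faster on the same optimal behavior it would eventually learn from the sparse reward alone.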
