• Title/Summary/Keyword: Autonomous Navigation System


Low energy ultrasonic single beacon localization for testing of scaled model vehicle

  • Dubey, Awanish C.;Subramanian, V. Anantha;Kumar, V. Jagadeesh
    • Ocean Systems Engineering
    • /
    • v.9 no.4
    • /
    • pp.391-407
    • /
    • 2019
  • Tracking the location (position) of a surface or underwater marine vehicle is an important part of guidance and navigation. While the Global Positioning System (GPS) works well in an open-sea environment, its use is limited when testing scaled-down models of such vehicles in a laboratory environment. This paper presents the design, development and implementation of a low-energy ultrasonic augmented single-beacon localization technique suitable for such requirements. The strategy applies an Extended Kalman Filter (EKF) to track location from dynamic distance measurements between the moving model and a fixed beacon, while an on-board motion sensor measures heading angle and velocity. Iterative application of the EKF yields the x and y coordinate positions of the moving model. Tests performed on a free-running ship model in a wave basin facility measuring 30 m by 30 m with 3 m water depth validate the proposed approach. The test results show quick convergence with an error of a few centimeters in the estimated position of the ship model. The proposed technique can be applied in real field scenarios by replacing the ultrasonic sensor with an industrial-grade long-range acoustic modem. Compared with existing localization techniques such as LBL, SBL and USBL, the proposed technique saves deployment cost and reduces the number of acoustic modems involved.
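The iteration described in this abstract (dead reckoning from heading and speed, corrected by a single beacon range) can be sketched as a two-state EKF. The state layout, noise values, and sensor model below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def ekf_step(x, P, psi, v, dt, r_meas, beacon, q=1e-3, r_var=4e-4):
    """One EKF iteration: dead-reckon the (x, y) position with on-board
    heading psi and speed v, then correct with the range r_meas to a
    fixed beacon. q and r_var are placeholder noise variances."""
    # Predict: propagate position along the measured velocity vector.
    x = x + dt * v * np.array([np.cos(psi), np.sin(psi)])
    P = P + q * np.eye(2)
    # Update: linearize the range model h(x) = ||x - beacon||.
    d = x - beacon
    r_pred = np.hypot(d[0], d[1])
    H = (d / r_pred).reshape(1, 2)        # Jacobian of h at the prediction
    S = (H @ P @ H.T).item() + r_var      # innovation variance
    K = (P @ H.T) / S                     # 2x1 Kalman gain
    x = x + (K * (r_meas - r_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Because a single range only constrains a circle around the beacon, it is the changing geometry over successive iterations that makes both coordinates observable, which matches the abstract's remark about quick convergence over iterations.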

Self-localization for Mobile Robot Navigation using an Active Omni-directional Range Sensor (전방향 능동 거리 센서를 이용한 이동로봇의 자기 위치 추정)

  • Joung, In-Soo;Cho, Hyung-Suck
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.1 s.94
    • /
    • pp.253-264
    • /
    • 1999
  • Most autonomous mobile robots view only things in front of them, and as a result they may collide with objects moving from the side or behind. To overcome this problem, an Active Omni-directional Range Sensor System has been built that obtains omni-directional range data through the use of a laser conic plane and a conic mirror. A mobile robot also has to know its current location and heading angle by itself, as accurately as possible, to navigate successfully in real environments. To achieve this capability, we propose a self-localization algorithm for a mobile robot using an active omni-directional range sensor in an unknown environment. The proposed algorithm estimates the current position and heading angle of the mobile robot by registering the range data obtained at two positions, the current and the previous one. To show the effectiveness of the proposed algorithm, a series of simulations was conducted, and the results show that the algorithm is efficient and can be utilized for self-localization of a mobile robot in an unknown environment.
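The registration of two consecutive range scans can be sketched with the standard least-squares rigid alignment (SVD-based Kabsch solution). The correspondence matching and the paper's actual registration criterion are not reproduced here, so this is only an assumed stand-in:

```python
import numpy as np

def register_scans(src, dst):
    """Estimate the rigid motion (R, t) with dst_i ~= R @ src_i + t from
    matched 2D range points seen from the previous (src) and current
    (dst) robot pose; correspondences are assumed given."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

The robot's heading change is then `atan2(R[1, 0], R[0, 0])` and its position change is `t`, which is the pose update the self-localization loop needs.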


Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning (딥러닝기반 입체 영상의 획득 및 처리 기술 동향)

  • Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.112-122
    • /
    • 2020
  • In 3D computer graphics, a depth map is an image that provides information related to the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems using an active illumination system and a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D image information obtained through the depth map is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical images, aviation and defense technology, and robotics. In addition, depth-map information is important data for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study surveys recent research trends in deep learning-based 3D data analysis methods and in depth-map extraction technology using convolutional neural networks. Further, the study focuses on 3D image processing technology related to digital holograms and multi-view image extraction/reconstruction, which are becoming more popular as the computing power of hardware rapidly increases.

Thinning-Based Topological Map Building for Local and Global Environments (지역 및 전역 환경에 대한 세선화 기반 위상지도의 작성)

  • Kwon Tae-Bum;Song Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.7
    • /
    • pp.693-699
    • /
    • 2006
  • An accurate and compact map is essential to an autonomous mobile robot system. For navigation it is efficient to use an occupancy grid map, because the environment is represented by a probability distribution, but it is difficult to apply to large environments since the required memory grows in proportion to the environment size. As an alternative, a topological map can represent the environment in terms of discrete nodes with edges connecting them. Such maps are usually constructed from Voronoi-like graphs, but in this paper the topological map is built incrementally from the local grid map using a thinning algorithm. This algorithm extracts only meaningful topological information in real time by using the C-obstacle concept, and it is robust to environmental change because its underlying local grid map is constructed with the Bayesian update formula. In this paper, the position probability is defined to evaluate the quantitative reliability of the end nodes of this thinning-based topological map (TTM). The global TTM can be constructed by merging local TTMs, matching the reliable end nodes determined by the position probability. It is shown that the proposed TTM represents the environment accurately in real time and is readily extended to a global TTM.
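The Bayesian update formula mentioned in the abstract is commonly implemented per grid cell in log-odds form, which turns repeated probabilistic fusion into a simple addition. A minimal sketch, with placeholder inverse-sensor-model probabilities rather than the paper's values:

```python
import math

def update_cell(logodds, p_occ):
    """Fuse one inverse-sensor-model reading P(occupied | z) into a cell
    stored as log-odds; adding log-odds is the Bayesian update."""
    return logodds + math.log(p_occ / (1.0 - p_occ))

def occupancy(logodds):
    """Recover the occupancy probability from the log-odds value."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))
```

A cell starts at log-odds 0 (probability 0.5); consistent "hit" readings drive it toward 1, "miss" readings toward 0, which is what makes the underlying grid map robust to occasional spurious measurements.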

A Fuzzy Controller for Obstacle Avoidance Robots and Lower Complexity Lookup-Table Sharing Method Applicable to Real-time Control Systems (이동 로봇의 장애물회피를 위한 퍼지제어기와 실시간 제어시스템 적용을 위한 저(低)복잡도 검색테이블 공유기법)

  • Kim, Jin-Wook;Kim, Yoon-Gu;An, Jin-Ung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.27 no.2
    • /
    • pp.60-69
    • /
    • 2010
  • A Lookup-Table (LUT) based fuzzy controller for obstacle avoidance speeds up operation in environments with multiple obstacles. An LUT-based fuzzy controller with a Positive/Negative (P/N) fuzzy rule base consisting of 18 rules was introduced in our earlier paper$^1$; this paper presents a 50-rule P/N fuzzy controller to enhance obstacle-avoidance performance. In general, the more rules are used, the more buffer memory is required. This paper therefore suggests an LUT sharing method that reduces the LUT buffer size without significant degradation of performance, making the buffer size independent of the overall complexity of the fuzzy system. Simulation using MSRDS (Microsoft Robotics Developer Studio) evaluates the proposed method, and to investigate its performance, experiments are carried out on a Pioneer P3-DX in the LabVIEW environment. The simulation and experiments show little difference in operation time between the fully valued LUT-based method and the LUT sharing method. On the other hand, the LUT sharing method reduces the buffer size by about 95% compared with the fully valued LUT-based design.
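The basic LUT trade-off the abstract builds on, trading inference time for memory, can be sketched as follows: evaluate the controller offline on a grid of quantized inputs, then replace run-time fuzzy inference with a table lookup. The controller, input ranges, and table size here are hypothetical stand-ins, not the paper's 50-rule P/N base or its sharing scheme:

```python
def build_lut(controller, n=64):
    """Offline: evaluate the controller on an n x n grid of quantized
    (distance, angle) inputs normalized to [0, 1]."""
    grid = [i / (n - 1) for i in range(n)]
    return [[controller(d, a) for a in grid] for d in grid], grid

def lut_control(lut, grid, dist, angle):
    """Online: replace inference with a nearest-cell table lookup."""
    i = min(range(len(grid)), key=lambda k: abs(grid[k] - dist))
    j = min(range(len(grid)), key=lambda k: abs(grid[k] - angle))
    return lut[i][j]
```

The run-time cost is now independent of the number of fuzzy rules; the buffer size, however, grows with the table resolution, which is the cost the paper's sharing method attacks.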

The Autonomous Ship Direction Discrimination System using Image Recognition (영상 인식을 활용한 자동 선박 방향 식별 시스템)

  • Park, Choon-Suck;Seo, Jong-Hoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2008.06a
    • /
    • pp.257-262
    • /
    • 2008
  • With advances in computing technology, various devices such as radar and GPS have been improved and developed to support safe ship navigation, and they provide much of the information needed for a voyage. Nevertheless, ship collisions continue to occur, and as ships grow larger the resulting damage also increases. Such collisions occur most frequently at night or when sea conditions deteriorate, precisely when the performance of the navigation-safety equipment mentioned above is limited, and in these restricted situations navigation often still depends solely on the human eye. We therefore propose a system that uses vision technology and a camera to automatically identify other ships in such situations. The idea is based on the fact that ships are legally required to use navigation lights and day shapes to signal their status to nearby vessels at night or when equipment is limited. Since testing the proposed system in a real sea environment is difficult, a prototype was implemented and evaluated in a laboratory setting, together with a user study. Specifically, LEDs serving as virtual navigation lights were arranged in the same colors and positions as on an actual ship, and recognition experiments using a camera achieved a recognition rate of about 90%. A user evaluation was also conducted with 15 navigation professionals using the experimental footage, and most of them answered that the proposed system would be useful at sea.


Robust Terrain Classification Against Environmental Variation for Autonomous Off-road Navigation (야지 자율주행을 위한 환경에 강인한 지형분류 기법)

  • Sung, Gi-Yeul;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.13 no.5
    • /
    • pp.894-902
    • /
    • 2010
  • This paper presents a vision-based off-road terrain classification method that is robust against environmental variation. As a supervised classification algorithm, we applied a neural network classifier using features extracted from the wavelet transform of an image. To overcome the effect of overall image-feature variation, we adopted environment sensors and gathered a database of training parameters according to environmental conditions. The robust terrain classification algorithm was implemented by choosing optimal parameters based on the environmental information. The proposed algorithm was embedded on a processor board running the VxWorks real-time operating system; the board contains four 1 GHz PowerPC 7448 CPUs. To implement an optimal software architecture supporting distributed parallel processing, we measured and analyzed the data delivery time between the CPUs. The performance of the algorithm was verified by comparing classification results on real off-road images acquired under various environmental conditions for the applied classifiers and features. Experiments show that the classification results remain robust under all tested environmental conditions.

Unit Mission Based Mission Planning and Automatic Mission Management for Robots (단위임무 기반 로봇의 임무 계획 및 자동화 임무 관리 방법론)

  • Lee, Ho-Joo;Park, Won-Ik;Kim, Do-Jong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.17 no.1
    • /
    • pp.1-7
    • /
    • 2014
  • In this paper, a method of mission planning and management for robots based on the unit mission is suggested. To make robots execute given missions continuously over time, a new mission-planning concept, in which a mission is composed of one or more unit missions, and an automatic mission-management scheme are developed. For managing a robot's missions in real time, six management methods are devised to cope with mismatches against the initial plan, which occur frequently during mission execution. Without operator involvement, any mismatch can be adjusted automatically by applying one of the mission-management methods. The suggested mission-planning concept and unit-mission-based management methods are partially realized in the Dog-Horse robot system, and the results confirm that the approach is viable for developing effective robot operation systems.

Requirements Analysis of Image-Based Positioning Algorithm for Vehicles

  • Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.5
    • /
    • pp.397-402
    • /
    • 2019
  • Recently, with the emergence of autonomous vehicles and increasing interest in safety, a variety of research has been actively conducted to precisely estimate the position of a vehicle by fusing sensors. Earlier research determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). More recently, precise positioning of a moving vehicle has been performed by fusing data obtained from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study is designed to enhance kinematic vehicle positioning performance by using feature-based recognition. Therefore, an analysis of the required precision of the observations obtained from images has been carried out. Velocity and attitude observations, assumed to be obtained from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed based on a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees should be used to improve the existing positioning algorithm by more than 10%.
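A back-of-the-envelope check shows why yaw precision dominates during a GNSS outage: dead reckoning with a constant yaw error accumulates cross-track error of roughly v · t · sin(err). The speed and outage length below are illustrative, not the paper's simulation settings:

```python
import math

def cross_track_drift(speed_mps, outage_s, yaw_err_deg):
    """Cross-track position error from dead reckoning with a constant
    yaw error over a GNSS outage: approximately v * t * sin(err)."""
    return speed_mps * outage_s * math.sin(math.radians(yaw_err_deg))
```

At 20 m/s over a 10 s outage, a 0.5-degree yaw error already produces close to 1.75 m of cross-track drift, which is why the abstract's 0.5-degree threshold is a meaningful requirement for image-derived attitude.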

MMS Data Accuracy Evaluation by Distance of Reference Point for Construction of Road Geospatial Information (도로공간정보 구축을 위한 기준점 거리 별 MMS 성과물의 정확도 평가)

  • Lee, Keun Wang;Park, Joon Kyu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.549-554
    • /
    • 2021
  • Precise 3D road geospatial information is basic infrastructure for autonomous driving and essential data for safe autonomous driving. MMS (Mobile Mapping System) equipment is used for road spatial information construction, and related research is ongoing. However, few studies have analyzed the effect of the distance to the baseline reference point, an important factor in the accuracy of MMS results, on the quality of those results. Therefore, in this study, the accuracy of data acquired using MMS was analyzed by reference-point distance. Point cloud data were constructed using MMS for the road in the study site. Four datasets were processed, each considering a different distance from the reference point, and the accuracy was analyzed by comparing the results against 12 checkpoints. The MMS data showed differences of -0.09 m to 0.11 m in the horizontal direction and 0.04 m to 0.19 m in the height direction. The error in the vertical direction was larger than in the horizontal direction, and accuracy decreased as the distance from the reference point increased. In addition, since the distance from the reference point may vary as the length of the road increases, additional research is needed. If an accuracy evaluation of a method using multiple reference points is carried out in the future, it will be possible to present an effective way of using reference points for the construction of precise road spatial information.