• Title/Summary/Keyword: autonomous map building

Study on Map Building Performance Using OSM in Virtual Environment for Application to Self-Driving Vehicle (가상환경에서 OSM을 활용한 자율주행 실증 맵 성능 연구)

  • MinHyeok Baek;Jinu Pahk;JungSeok Shim;SeongJeong Park;YongSeob Lim;GyeungHo Choi
    • Journal of Auto-vehicle Safety Association
    • /
    • v.15 no.2
    • /
    • pp.42-48
    • /
    • 2023
  • In recent years, automated vehicles have garnered attention in the multidisciplinary research field, promising increased safety on the road and new opportunities for passengers. High-Definition (HD) maps have been in development for many years as they offer roadmaps with inch-perfect accuracy and high environmental fidelity, containing precise information about pedestrian crossings, traffic lights/signs, barriers, and more. Demonstrating autonomous driving requires verification of driving on actual roads, but this can be challenging, time-consuming, and costly. To overcome these obstacles, creating HD maps of real roads in a simulation and conducting virtual driving has become an alternative solution. However, existing HD maps built from high-precision data are expensive and time-consuming to produce, which limits their verification in various environments and on different roads; as a result, autonomous driving can be demonstrated only on extremely limited roads and environments. In this paper, we propose a new and simple method for implementing HD maps that are more accessible for autonomous driving demonstrations. Our HD map combines the CARLA simulator and OpenStreetMap (OSM) data, both open source, allowing HD maps containing high-accuracy road information to be created globally with minimal dependencies. Our results show that the easily accessible HD map has an accuracy of 98.28% for longitudinal length on straight roads and 98.42% on curved roads. Moreover, the lateral accuracy of the road width was 100% compared to a manually built map reflecting the exact road data. The proposed method can contribute to the advancement of autonomous driving and enable its demonstration in diverse environments and on various roads.
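The open-source pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' implementation: it converts a raw OpenStreetMap export into an OpenDRIVE (.xodr) road network using the converter bundled with the CARLA Python package (in builds that ship the Osm2Odr module); the file names are placeholders.

```python
import carla

# Read a raw OpenStreetMap export (e.g. downloaded from openstreetmap.org).
with open("map_export.osm", "r", encoding="utf-8") as f:
    osm_data = f.read()

# Convert the OSM description into an OpenDRIVE road network
# using default conversion settings.
settings = carla.Osm2OdrSettings()
xodr_data = carla.Osm2Odr.convert(osm_data, settings)

# Save the result; the .xodr file can then be loaded into a CARLA world
# for virtual driving tests on the reconstructed roads.
with open("map_export.xodr", "w", encoding="utf-8") as f:
    f.write(xodr_data)
```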

Image-based Localization Recognition System for Indoor Autonomous Navigation (실내 자율 비행을 위한 영상 기반의 위치 인식 시스템)

  • Moon, SungTae;Cho, Dong-Hyun;Han, Sang-Hyuck
    • Aerospace Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.128-136
    • /
    • 2013
  • Recently, localization systems using various sensors have been studied in response to increased interest in autonomous flight. In indoor environments where GPS is unavailable, another way of recognizing the current position is needed. Among the many approaches to pose estimation, image-based localization has attracted particular interest. In this paper, we describe a marker-based localization system and its implementation for autonomous indoor flight. To extend the approach to real environments where markers are unavailable, localization based on real-time 3D map building is also discussed.
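As a rough sketch of the marker-based localization the abstract mentions (not the authors' system), the snippet below recovers the camera pose from the four detected image corners of a square marker of known size via a PnP solve; the corner pixels, marker size, and camera intrinsics are placeholder values.

```python
import numpy as np
import cv2

MARKER_SIZE = 0.20  # marker edge length in metres (placeholder)

# 3D corners of the marker in its own frame (marker lies in the z = 0 plane).
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# Pixel corners of the detected marker (placeholders; in practice these come
# from a marker detector running on the onboard camera image).
image_points = np.array([
    [310.0, 220.0], [410.0, 225.0], [405.0, 330.0], [305.0, 325.0]
], dtype=np.float64)

# Camera intrinsics from a prior calibration (placeholder values).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)

# Camera position in the marker frame; since each marker's pose in the global
# map is known, this yields the vehicle's indoor position.
camera_pos = (-R.T @ tvec).ravel()
print("camera position in marker frame [m]:", camera_pos)
```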

Building of Occupancy Grid Map of an Autonomous Mobile Robot Based on Stereo Vision (스테레오 비전 방식을 이용한 자율 이동로봇의 격자지도 작성)

  • Kim, Jong-Hyup;Choi, Chang-Hyuk;Song, Jae-Bok;Park, Sung-Kee;Kim, Mun-Sang
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.5
    • /
    • pp.36-42
    • /
    • 2002
  • This paper presents a way of building the occupancy grid map that a mobile robot needs in order to navigate autonomously in an unknown environment. A disparity map resulting from stereo matching can be converted into 2D distance information; if the stereo matching contains errors, however, the resulting map becomes unreliable. In this paper, a new morphological filter is proposed to reject 'spikes' in the disparity map caused by stereo mismatches, exploiting the fact that these spikes occur locally. The new method is simpler and more easily implemented than existing similar algorithms. Several occupancy grid maps built from stereo vision with the proposed algorithm were compared against the actual distance information to verify the validity of the proposed method.
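The snippet below is a simplified stand-in for the paper's pipeline (it uses a local median test rather than the authors' morphological filter): disparities that deviate sharply from their neighbourhood are rejected as spikes, and the remaining points are projected into a 2D occupancy grid. The camera parameters and disparity map are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def reject_spikes(disparity, window=5, max_jump=8.0):
    """Reject disparities that differ sharply from their local neighbourhood,
    exploiting the fact that stereo-mismatch spikes occur locally."""
    smoothed = median_filter(disparity, size=window)
    cleaned = disparity.copy()
    cleaned[np.abs(disparity - smoothed) > max_jump] = 0.0   # 0 = invalid
    return cleaned

def disparity_to_grid(disparity, fx, baseline, cx, cell=0.1, size=200):
    """Project valid disparities into a 2D occupancy grid (robot at the bottom centre)."""
    grid = np.zeros((size, size), dtype=np.uint8)
    rows, cols = np.nonzero(disparity > 0)
    z = fx * baseline / disparity[rows, cols]     # depth along the optical axis
    x = (cols - cx) * z / fx                      # lateral offset
    gx = (x / cell + size / 2).astype(int)
    gz = (z / cell).astype(int)
    ok = (gx >= 0) & (gx < size) & (gz >= 0) & (gz < size)
    grid[gz[ok], gx[ok]] = 1                      # mark cells as occupied
    return grid

# Placeholder disparity map with one artificial mismatch spike.
disp = 20.0 + np.random.uniform(-2.0, 2.0, size=(240, 320))
disp[100, 150] = 200.0
grid = disparity_to_grid(reject_spikes(disp), fx=400.0, baseline=0.12, cx=160.0)
print("occupied cells:", int(grid.sum()))
```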

Localization of an Autonomous Mobile Robot Using Ultrasonic Sensor Data (초음파센서를 이용한 자율 이동로봇의 위치추적)

  • Choi, Chang-Hyuk;Song, Jae-Bok;Kim, Mun-Sang
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.666-669
    • /
    • 2000
  • Localization is the process of aligning the robot's local coordinates with the global coordinates of a map. A mobile robot's location is basically computed by dead reckoning, but this position information becomes increasingly inaccurate during navigation due to odometry errors. In this paper, the method of building a map of the robot's environment using ultrasonic sensor data and the occupancy grid scheme is briefly presented. Then, search and matching algorithms that compensate for the odometry error by comparing the local map with the reference map are proposed and verified by experiments. It is shown that the compensated error does not accumulate and remains within a limited range.
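A toy version of the search-and-matching idea (not the paper's exact algorithm): slide the local occupancy grid over a small window of candidate offsets and keep the offset that maximises the overlap of occupied cells with the reference map; that offset is the correction for the accumulated odometry error. The map contents below are placeholders.

```python
import numpy as np

def match_offset(reference, local, search=5):
    """Search small (dy, dx) cell offsets and return the one that best aligns
    the occupied cells of the local map with the reference map."""
    best_offset, best_score = (0, 0), -1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(local, dy, axis=0), dx, axis=1)
            score = int(np.sum((shifted == 1) & (reference == 1)))
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score

# Toy maps: the local map is the reference shifted by (2, -1) cells,
# mimicking an accumulated odometry error of that size.
reference = np.zeros((40, 40), dtype=np.uint8)
reference[10:30, 20] = 1                                  # a wall segment
local = np.roll(np.roll(reference, 2, axis=0), -1, axis=1)

offset, score = match_offset(reference, local)
print("correction to apply (cells):", offset)   # the odometry error is -offset
```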

Global Map Building and Navigation of Mobile Robot Based on Ultrasonic Sensor Data Fusion

  • Kang, Shin-Chul;Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.198-204
    • /
    • 2007
  • In mobile robotics, ultrasonic sensors have become standard devices for collision avoidance, and their applicability to map building and navigation has been exploited in recent years. This paper is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining ultrasonic and IR sensors, for mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Global map building based on multi-sensor data fusion is applied to recognize an obstacle-free path from a starting position to a known goal region while simultaneously building a map of straight-line-segment geometric primitives by applying the Hough transform to actual, noisy sonar data. We describe the robot system architecture designed and implemented in this study and give only a short review of the Hough transform, since several recent thorough books and review papers cover the topic. Experimental results with a real Pioneer DX2 mobile robot demonstrate the effectiveness of the discussed methods.
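As an illustration of the line-extraction step (a bare-bones Hough accumulator, not the authors' implementation), the sketch below votes noisy 2D sonar hit points into a (theta, rho) grid and reads off the dominant straight-line primitive; the resolutions and the synthetic wall are placeholders.

```python
import numpy as np

def hough_line(points, rho_res=0.05, theta_res=np.deg2rad(2.0), rho_max=10.0):
    """Vote each 2D point into a (theta, rho) accumulator and return the
    strongest line, parameterised as x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.arange(0.0, np.pi, theta_res)
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(thetas), len(rhos)), dtype=np.int32)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + rho_max) / rho_res).astype(int)
        ok = (idx >= 0) & (idx < len(rhos))
        acc[np.arange(len(thetas))[ok], idx[ok]] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], rhos[ri], int(acc[ti, ri])

# Noisy sonar hits lying roughly on a wall at x = 2.0 m.
rng = np.random.default_rng(0)
pts = np.column_stack([2.0 + rng.normal(0.0, 0.03, 50), rng.uniform(-3.0, 3.0, 50)])
theta, rho, votes = hough_line(pts)
print(f"wall: theta = {np.degrees(theta):.1f} deg, rho = {rho:.2f} m, votes = {votes}")
```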

A Technique for Building Occupancy Maps Using Stereo Depth Information and Its Application (스테레오 깊이 정보를 이용한 점유맵 구축 기법과 응용)

  • Kim, Nak-Hyun;Oh, Se-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.3
    • /
    • pp.1-10
    • /
    • 2008
  • An occupancy map is a representation describing the regions occupied by objects in 3D space, which can be utilized for autonomous navigation and object recognition. In this paper, we describe a technique for building an occupancy map using depth data extracted from stereo images. In addition, techniques are proposed for utilizing the occupancy map to segment object regions. After the geometric information of the ground plane is extracted from a disparity image, the occupancy map is constructed by projecting each matched point into a 3D space referenced to the ground plane. We explain techniques for extracting moving object regions using the occupancy map and present experimental results on real stereo images.
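A compact sketch of the projection-and-segmentation idea (illustrative only, with placeholder data): 3D points recovered from stereo are projected onto a ground-referenced grid after discarding points near the ground plane, and connected clusters of well-supported cells are labelled as object regions.

```python
import numpy as np
from scipy.ndimage import label

def build_occupancy(points, cell=0.05, extent=4.0, ground_tol=0.05):
    """Project 3D points (x, y on the ground plane, z up, ground at z ~= 0)
    onto a grid and count the points above the ground plane in each cell."""
    size = int(2 * extent / cell)
    counts = np.zeros((size, size), dtype=np.int32)
    above = points[points[:, 2] > ground_tol]          # drop ground-plane points
    ix = ((above[:, 0] + extent) / cell).astype(int)
    iy = ((above[:, 1] + extent) / cell).astype(int)
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    np.add.at(counts, (iy[ok], ix[ok]), 1)
    return counts

def segment_objects(counts, min_points=10):
    """Threshold the occupancy counts and label connected blobs as objects."""
    labels, n = label(counts >= min_points)
    return labels, n

# Placeholder point cloud: one box-shaped object about 1 m ahead, plus ground points.
rng = np.random.default_rng(1)
obj = rng.uniform([-0.2, 0.9, 0.0], [0.2, 1.3, 0.8], size=(2000, 3))
ground = rng.uniform([-4.0, -4.0, -0.01], [4.0, 4.0, 0.01], size=(2000, 3))
labels, n = segment_objects(build_occupancy(np.vstack([obj, ground])))
print("object regions found:", n)
```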

A Robot Coverage Algorithm Integrated with SLAM for Unknown Environments (미지의 환경에서 동작하는 SLAM 기반의 로봇 커버리지 알고리즘)

  • Park, Jung-Kyu;Jeon, Heung-Seok;Noh, Sam-H.
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.1
    • /
    • pp.61-69
    • /
    • 2010
  • An autonomous robot must have a global workspace map in order to cover the complete workspace. However, most previous coverage algorithms assume that a grid map of the workspace to be covered is available before the task starts. For this reason, most coverage algorithms cannot be applied to complete coverage tasks in unknown environments, where an autonomous robot has to build the workspace map by itself. Thus, we propose a new DmaxCoverage algorithm that allows a robot to carry out a complete coverage task in unknown environments; the algorithm integrates a SLAM algorithm so that the workspace map is built simultaneously. Experimentally, we verify that the DmaxCoverage algorithm is more efficient than previous algorithms.
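DmaxCoverage itself is not reproduced here, but the toy loop below shows the kind of coverage behaviour such an algorithm runs on top of an occupancy grid: repeatedly plan (breadth-first) to the nearest free cell not yet covered until none remain. In the paper's setting the grid would be produced and refined online by the SLAM module; here it is a fixed placeholder.

```python
from collections import deque

FREE, OBSTACLE = 0, 1

def path_to_nearest_uncovered(grid, covered, start):
    """Breadth-first search over free cells from `start` to the closest cell
    not yet covered; returns the path, or None if everything reachable is covered."""
    rows, cols = len(grid), len(grid[0])
    prev, queue, seen = {}, deque([start]), {start}
    while queue:
        cell = queue.popleft()
        if cell not in covered and cell != start:
            path = [cell]                          # rebuild the path back to start
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return list(reversed(path))
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == FREE and nxt not in seen):
                seen.add(nxt)
                prev[nxt] = cell
                queue.append(nxt)
    return None

# Placeholder 5x6 workspace with a small obstacle block.
grid = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
pose, covered = (0, 0), {(0, 0)}
while (path := path_to_nearest_uncovered(grid, covered, pose)) is not None:
    covered.update(path)                           # "drive" along the path
    pose = path[-1]
print("cells covered:", len(covered))              # all reachable free cells
```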

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong;Ock, Seung-Ho;Kim, San;Kim, Dong-Hwan
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.2019-2023
    • /
    • 2005
  • In this paper, an autonomous omni-directional cleaning robot that recognizes obstacles and a battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information along with appropriate algorithms. Three omni-directional wheels allow the robot to move in any direction, enabling faster maneuvering than a simple track-type robot. The robot system transfers commands and image data through Bluetooth wireless modules so that it can be operated from a remote place. Robot vision combined with the sensor data allows the robot to behave autonomously. Autonomous searching for the battery charger is implemented using map building, which overcomes the error caused by wheel slip, together with camera and sensor information.
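The three-wheel omni-directional base the abstract describes can be summarised by one small kinematic map, sketched below with placeholder geometry (the wheel radius and base radius are not the paper's values): a body-frame velocity command (vx, vy, omega) is converted into three wheel speeds, which is what lets the platform translate in any direction while rotating.

```python
import numpy as np

R_WHEEL = 0.03   # wheel radius [m] (placeholder)
L_BASE = 0.15    # distance from the robot centre to each wheel [m] (placeholder)
ALPHA = np.deg2rad([0.0, 120.0, 240.0])   # wheel mounting angles, 120 deg apart

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: body-frame twist (vx, vy, omega) -> wheel angular
    velocities [rad/s]. Each wheel drives tangentially, i.e. 90 degrees from
    its mounting direction."""
    J = np.column_stack([-np.sin(ALPHA), np.cos(ALPHA), np.full(3, L_BASE)])
    return (J @ np.array([vx, vy, omega])) / R_WHEEL

# Pure sideways motion at 0.2 m/s with no rotation: all three wheels must turn,
# which is exactly what a differential-drive or track-type base cannot do.
print(wheel_speeds(0.0, 0.2, 0.0))
```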

Reliable Navigation of a Mobile Robot in Cluttered Environment by Combining Evidential Theory and Fuzzy Controller (추론 이론과 퍼지 컨트롤러 결합에 의한 이동 로봇의 자유로운 주변 환경 인식)

  • Kim, Young-Chul;Cho, Sung-Bae;Oh, Sang-Rok
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.05a
    • /
    • pp.136-139
    • /
    • 2001
  • This paper develops a sensor-based navigation method that utilizes fuzzy logic and Dempster-Shafer evidence theory for a mobile robot in an uncertain environment. The proposed navigator consists of two behaviors: obstacle avoidance and goal seeking. To navigate reliably, a map-building process is carried out before the robot seeks the goal position, and a robust fuzzy controller is designed. The map is constructed on a two-dimensional occupancy grid, and the sensor readings are fused into the map using the Dempster-Shafer inference rule. Whenever the robot moves, it gathers new information about the environment and replaces the old map with the new one, so the robot can wander while searching for the goal position. The usefulness of the proposed method is verified by a series of simulations. The paper also deals with fuzzy modeling of complex and uncertain nonlinear systems, for which conventional mathematical models may fail to give satisfactory results, and provides numerical examples to evaluate the feasibility and generality of the proposed method.
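The fusion step the abstract describes can be illustrated for a single grid cell. The sketch below applies Dempster's rule of combination over the frame {occupied, empty}, with a third mass for ignorance; the sensor-model numbers are placeholders, not the paper's values.

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for basic belief masses over the frame
    {O (occupied), E (empty)} plus U = {O, E} (ignorance)."""
    conflict = m1['O'] * m2['E'] + m1['E'] * m2['O']
    k = 1.0 - conflict                     # mass assigned to non-conflicting sets
    fused = {
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / k,
        'E': (m1['E'] * m2['E'] + m1['E'] * m2['U'] + m1['U'] * m2['E']) / k,
    }
    fused['U'] = 1.0 - fused['O'] - fused['E']
    return fused

# A cell starts in total ignorance; two successive sonar readings both suggest
# "occupied" with limited confidence (placeholder sensor model).
cell = {'O': 0.0, 'E': 0.0, 'U': 1.0}
reading = {'O': 0.6, 'E': 0.1, 'U': 0.3}
for _ in range(2):
    cell = ds_combine(cell, reading)
print(cell)   # belief in 'O' grows as consistent evidence accumulates
```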

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.256-262
    • /
    • 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot, and these features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map-building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched against the map; features that do not match are added to the map. This map matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched to the existing map to estimate the robot's real position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error of the robot's heading angle is ±3 degrees.
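The preprocessing step (removing the fisheye lens's radial distortion before ceiling/wall segmentation) might look like the sketch below, which relies on OpenCV's fisheye camera model; the intrinsic matrix and distortion coefficients are placeholders standing in for a prior calibration, not the paper's values.

```python
import numpy as np
import cv2

# Placeholder intrinsics (K) and fisheye distortion coefficients (D);
# in practice these come from calibrating the ceiling-facing camera.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0,   0.0,   1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0]).reshape(4, 1)   # k1..k4 of the fisheye model

def undistort(image):
    """Rectify a fisheye frame so that labeling, convex-hull segmentation, and
    feature extraction can run on an image free of radial distortion."""
    h, w = image.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, cv2.INTER_LINEAR)

frame = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in for a camera frame
print(undistort(frame).shape)
```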
