Vision-Based Navigation


Object Recognition-based Global Localization for Mobile Robots

  • Park, Soon-Yong; Park, Mignon; Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition, we present a new global localization method for robot navigation. To this end, we model an indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition, and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes; this is similar to the data of a 2D laser range finder. We can therefore build a hybrid local node for a topological map, composed of a metric map of the indoor environment and an object location map. Based on this modeling, we suggest a coarse-to-fine strategy for estimating the global pose of a mobile robot: a coarse pose is obtained by object recognition and SVD-based least-squares fitting (sketched below), and the refined pose is then estimated with a particle filtering algorithm. Real experiments show that the proposed method is an effective vision-based global localization algorithm.

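The coarse step above is a standard rigid-body alignment; a minimal Python sketch follows, assuming N matched 3D points are already available as map_pts (object positions stored in the map) and obs_pts (the same objects measured in the robot frame by the stereo camera). The array names and toy data are hypothetical.

```python
import numpy as np

def svd_rigid_fit(obs_pts, map_pts):
    """Least-squares R, t with map_pts ~ R @ obs_pts + t (Kabsch/Umeyama)."""
    c_obs, c_map = obs_pts.mean(axis=0), map_pts.mean(axis=0)
    H = (obs_pts - c_obs).T @ (map_pts - c_map)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_map - R @ c_obs
    return R, t

# Hypothetical usage: three recognized objects, robot rotated 30 deg and shifted.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
map_pts = np.array([[2.0, 1.0, 0.5], [4.0, -1.0, 0.7], [3.0, 3.0, 0.4]])
obs_pts = (map_pts - np.array([1.0, 2.0, 0.0])) @ R_true  # seen in robot frame
R, t = svd_rigid_fit(obs_pts, map_pts)
print(np.allclose(R @ obs_pts.T + t[:, None], map_pts.T))  # True
```

The recovered coarse pose would then seed the particle filter for refinement.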

Planar Region Extraction for Visual Navigation using Stereo Cameras

  • Lee, Se-Na; You, Bum-Jae; Ko, Sung-Jea
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.681-686 / 2003
  • In this paper, we propose an algorithm to extract valid planar regions from stereo images for the visual navigation of mobile robots. The algorithm is based on the difference image between the stereo images after applying the homography matrix between the stereo cameras. Invalid planar regions are filtered out by labeling the difference image and discarding blobs that are too small. Invalid large planar regions, such as walls, are also removed by a weighted low-pass filtering of the difference image over past difference images. The algorithm was verified experimentally with a stereo camera system mounted on a mobile robot and a PC-based real-time vision system; a sketch of the core steps follows.

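A hedged sketch of that pipeline, assuming OpenCV and a known ground-plane homography H (in practice derived from the stereo calibration): warp one view by H, take the absolute difference, threshold, and keep only blobs large enough to count as valid planar regions. The thresholds are placeholders.

```python
import numpy as np
import cv2

def planar_regions(left, right, H, min_area=500, diff_thresh=20):
    """Pixels on the plane induced by H map onto each other across the views,
    so the difference image is small there; large blobs of small difference
    are candidate planar (traversable) regions."""
    h, w = left.shape[:2]
    warped = cv2.warpPerspective(right, H, (w, h))   # right view -> left view
    diff = cv2.absdiff(left, warped)
    _, plane = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY_INV)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(plane)
    mask = np.zeros((h, w), np.uint8)
    for i in range(1, n):                            # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:   # drop invalid small blobs
            mask[labels == i] = 255
    return mask

# Hypothetical usage with synthetic grayscale frames and an identity homography.
left = np.full((240, 320), 80, np.uint8)
right = left.copy()
right[:100, :] += 60                                 # non-planar region differs
H = np.eye(3)
print(planar_regions(left, right, H).mean() > 0)     # True
```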

Localization of AUV Using Visual Shape Information of Underwater Structures

  • Jung, Jongdae; Choi, Suyoung; Choi, Hyun-Taek; Myung, Hyun
    • Journal of Ocean Engineering and Technology / v.29 no.5 / pp.392-397 / 2015
  • An autonomous underwater vehicle (AUV) can perform flexible operations even in complex underwater environments because of its autonomy. Localization is one of the key components of such autonomous navigation. Because the inertial navigation system of an AUV suffers from drift, observing fixed objects in an inertial reference frame can enhance localization performance. In this paper, we propose a method for AUV localization using visual measurements of underwater structures. A camera measurement model that emulates the camera's observation of underwater structures is designed within a particle filtering framework, and the particle weights are updated with the visual information extracted from the structures (see the sketch below). The proposed method is validated with experiments performed in a structured basin environment.
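
A minimal sketch of the particle-weight update, with the camera measurement model reduced to a placeholder range/bearing function h(); the Gaussian likelihood and systematic resampling are common choices, not necessarily the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(pose, landmark):
    """Placeholder camera model: predicted range/bearing to a structure corner."""
    dx, dy = landmark[0] - pose[0], landmark[1] - pose[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - pose[2]])

def update_weights(particles, weights, z, landmark, sigma=np.array([0.2, 0.05])):
    """Reweight particles by the Gaussian likelihood of the visual measurement z."""
    for i, p in enumerate(particles):
        v = z - h(p, landmark)
        v[1] = (v[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
        weights[i] *= np.exp(-0.5 * np.sum((v / sigma) ** 2))
    weights += 1e-300                                  # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling back to uniform weights."""
    idx = np.searchsorted(np.cumsum(weights),
                          (rng.random() + np.arange(len(weights))) / len(weights))
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

# Hypothetical usage: 500 particles around a true AUV pose (x, y, yaw).
particles = rng.normal([5.0, 3.0, 0.1], [1.0, 1.0, 0.2], size=(500, 3))
weights = np.full(500, 1 / 500)
landmark = np.array([10.0, 8.0])
z = h(np.array([5.0, 3.0, 0.1]), landmark)             # noiseless measurement
weights = update_weights(particles, weights, z, landmark)
particles, weights = resample(particles, weights)
print(particles[:, :2].mean(axis=0))                   # concentrates near [5, 3]
```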

Curve-Modeled Lane Detection based GPS Lateral Error Correction Enhancement

  • Lee, Byung-Hyun; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.81-86 / 2015
  • GPS position errors were corrected for the guidance of autonomous vehicles. From vision, we can obtain the lateral distance from the lane center and the angle between the detected left and right lines. With a controller that drives these two measurements to zero, a lane-following system can be easily implemented. The problem is that where no lane marking exists, such as at a crossroad, this guidance does not work. In addition, lane detection is unreliable on curved roads: a straight-line model mismatch introduces an error into the lateral distance measurement. For these reasons, we propose a GPS error-correction filter based on curve-modeled lane detection (sketched below) and evaluate its performance on an autonomous vehicle at the test site.
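
A hedged sketch of the curve-model idea: fit a quadratic to detected lane-marking points in the vehicle frame, read the lateral offset and heading difference at the vehicle origin, and apply a scalar Kalman correction to the GPS lateral position. The quadratic model, names, and noise levels are illustrative assumptions.

```python
import numpy as np

def lane_curve_measurement(xs, ys):
    """Fit y = a*x^2 + b*x + c to lane points ahead of the vehicle (x forward,
    y left). At x = 0: c is the lateral offset, atan(b) the heading difference."""
    a, b, c = np.polyfit(xs, ys, 2)
    return c, np.arctan(b)

def kalman_lateral_update(y_gps, P, y_vision, R_vision=0.05**2):
    """Scalar Kalman correction of the GPS lateral position with the vision
    measurement; P is the GPS lateral error variance."""
    K = P / (P + R_vision)
    return y_gps + K * (y_vision - y_gps), (1 - K) * P

# Hypothetical usage: a gently curving lane, vehicle offset 0.3 m to the right.
xs = np.linspace(2.0, 20.0, 10)
ys = 0.002 * xs**2 + 0.01 * xs - 0.3
offset, dpsi = lane_curve_measurement(xs, ys)
y_corr, P = kalman_lateral_update(y_gps=-0.8, P=1.0, y_vision=offset)
print(round(offset, 2), round(y_corr, 2))   # -0.3, corrected value near -0.3
```

The quadratic term is what removes the straight-line model mismatch on curves that the abstract describes.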

Lane-Level Positioning based on 3D Tracking Path of Traffic Signs

  • Park, Soon-Yong; Kim, Sung-ju
    • The Journal of Korea Robotics Society / v.11 no.3 / pp.172-182 / 2016
  • Lane-level vehicle positioning is an important task for enhancing the accuracy of in-vehicle navigation systems and the safety of autonomous vehicles. GPS (Global Positioning System) and DGPS (Differential GPS) are generally used in navigation service systems, but they only provide an accuracy of 2~3 m. In this paper, we propose a 3D vision-based lane-level positioning technique that provides an accurate vehicle position. The proposed method determines the current driving lane of a vehicle by tracking the 3D positions of traffic signs standing at the side of the road. Using a stereo camera, the 3D tracking path of each traffic sign is computed, and its projection onto the 2D road plane is used to determine the distance from the vehicle to the sign (see the sketch below). Several experiments were performed to analyze the feasibility of the proposed method on real roads. According to the experimental results, the proposed method achieves 90.9% accuracy in lane-level positioning.
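
A minimal sketch of the lane-determination step, assuming the tracked sign positions are already expressed in the vehicle frame: project the 3D trail onto the road plane, take its direction as the driving direction, and convert the perpendicular vehicle-to-sign distance into a lane index. The 3.5 m lane width and frame conventions are assumptions.

```python
import numpy as np

def lane_index(track_xyz, lane_width=3.5):
    """track_xyz: Nx3 sign positions in the vehicle frame over time
    (x forward, y left, z up). Returns (lateral distance, lane number
    counted from the roadside where the sign stands)."""
    xy = track_xyz[:, :2]                     # project onto the 2D road plane
    d = xy[-1] - xy[0]
    d = d / np.linalg.norm(d)                 # driving direction from the trail
    n = np.array([-d[1], d[0]])               # unit normal to the trail
    lateral = abs(xy.mean(axis=0) @ n)        # vehicle-to-sign distance
    return lateral, int(lateral // lane_width) + 1

# Hypothetical usage: a sign 5.6 m to the right while driving straight ahead.
track = np.array([[20.0 - 2 * k, -5.6, 3.0] for k in range(8)])
lat, lane = lane_index(track)
print(round(lat, 1), lane)                    # 5.6 2 -> second lane from the curb
```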

Camera Calibration for Machine Vision Based Autonomous Vehicles

  • Lee, Mun-Gyu; An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of the camera in the machine vision system. Camera calibration is required to find accurate values of these parameters. This paper presents a new camera-calibration algorithm using known traffic-lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from these points. The least-squares method is used to estimate the Group I parameters (a sketch of this setup follows). Finally, the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results demonstrate the feasibility of the proposed algorithm.
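
A hedged sketch of the Group I estimation, not the paper's exact equations: a simplified camera model (tilt, focal length, horizontal scale factor; camera height fixed) is fitted by nonlinear least squares to control points on two lane lines of known width. All numbers are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

CAM_HEIGHT = 1.2  # m; camera height above the road, assumed known here

def project(params, ground_xy):
    """Simplified pinhole camera looking down the road: tilt about the
    lateral axis, focal length f, and horizontal scale factor s."""
    tilt, f, s = params
    x, y = ground_xy[:, 0], ground_xy[:, 1]             # x forward, y lateral (m)
    zc = x * np.cos(tilt) + CAM_HEIGHT * np.sin(tilt)   # depth along optical axis
    yc = x * np.sin(tilt) - CAM_HEIGHT * np.cos(tilt)   # vertical camera coord
    return np.column_stack((s * f * y / zc, f * yc / zc))  # (u, v) in pixels

def residuals(params, ground_xy, uv_obs):
    return (project(params, ground_xy) - uv_obs).ravel()

# Six control points on two lane lines 3.5 m apart (known lane width).
ground = np.array([[5.0, -1.75], [10.0, -1.75], [15.0, -1.75],
                   [5.0,  1.75], [10.0,  1.75], [15.0,  1.75]])
true_params = np.array([0.08, 800.0, 1.02])            # tilt rad, f px, scale
uv = project(true_params, ground)                      # synthetic observations

fit = least_squares(residuals, x0=[0.0, 700.0, 1.0], args=(ground, uv))
print(np.round(fit.x, 3))                              # ~ [0.08, 800.0, 1.02]
```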

Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation

  • Kwon, Ji-Wook; Hong, Suk-Kyo; Chwa, Dong-Kyoung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on the control of mobile robots using camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, requires controlling the forward velocity as well, depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, so that the mobile robot can move faster when the corner of the corridor is far away and slow down as it approaches the dead end of the corridor. In this way, a smooth turning motion along the corridor is possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. The vanishing point and a pseudo desired position are then obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law (sketched below). Both numerical and experimental results are included to demonstrate the validity of the proposed method.
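
A minimal sketch of the guidance step: the vanishing point is the intersection of the two corridor edge lines (homogeneous coordinates), the rotational velocity follows an LOS-style law on its horizontal offset, and the forward velocity shrinks with the estimated distance to the dead end. Gains and the line parametrization are assumptions.

```python
import numpy as np

def vanishing_point(l1, l2):
    """Lines as (a, b, c) with a*u + b*v + c = 0; intersection via cross product."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def los_velocities(vp_u, dist_to_end, img_width=640,
                   k_w=0.004, v_max=0.6, d_slow=3.0):
    """Rotational speed from the vanishing-point offset (LOS-style heading
    error); forward speed shrinks as the dead end gets closer."""
    heading_err = vp_u - img_width / 2           # pixels off the image center
    w = -k_w * heading_err                       # rad/s
    v = v_max * min(1.0, dist_to_end / d_slow)   # m/s
    return v, w

# Hypothetical usage: corridor edges meeting right of center, dead end 5 m away.
l1 = np.array([1.0, -1.0, -40.0])     # u - v = 40
l2 = np.array([1.0,  1.0, -680.0])    # u + v = 680
vp = vanishing_point(l1, l2)          # (360, 320)
print(vp, los_velocities(vp[0], dist_to_end=5.0))
```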

A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot

  • Kim, Hyun Woo; Hwang, Yo-Seup; Kim, Yun-Ki; Lee, Dong-Hyuk; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.19 no.11 / pp.1029-1035 / 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for the autonomous navigation of a mobile robot. Many technologies exist for vehicle safety, such as airbags, ABS, and EPS, and real-time lane detection is a fundamental requirement for an automobile system that uses information from outside the vehicle. Representative lane-recognition methods are vision-based and LRF-based. A vision-based system recognizes the three-dimensional environment well only under good image-capture conditions; unexpected factors such as poor illumination, occlusions, and vibrations prevent vision alone from satisfying the fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For three-dimensional lane detection, the difference in laser reflectivity between asphalt and lane paint, which depends on color and distance, is used to extract feature points (see the sketch below). A stable tracking algorithm is also introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
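
A hedged sketch of the feature-extraction idea: lane paint is more retro-reflective than asphalt, so scan points whose intensity clearly exceeds a robust asphalt baseline are taken as lane candidates and converted to Cartesian coordinates. The threshold rule and the synthetic scan are assumptions.

```python
import numpy as np

def lane_points(angles, ranges, intensities, k=3.0):
    """Return (x, y) of scan points whose reflectivity exceeds the asphalt
    baseline by k robust standard deviations (lane paint reflects more)."""
    base = np.median(intensities)                        # asphalt baseline
    mad = np.median(np.abs(intensities - base)) + 1e-9   # robust spread
    lane = intensities > base + k * 1.4826 * mad
    return np.column_stack((ranges[lane] * np.cos(angles[lane]),
                            ranges[lane] * np.sin(angles[lane])))

# Hypothetical usage: a 181-point scan with two bright lane-marking bands.
angles = np.radians(np.linspace(-45, 45, 181))
ranges = 4.0 / np.cos(angles)                 # flat road 4 m ahead
intensities = np.full(181, 50.0)
intensities[40:48] = 180.0                    # left lane marking
intensities[133:141] = 180.0                  # right lane marking
print(lane_points(angles, ranges, intensities).shape)   # (16, 2)
```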

Two Feature Points Based Laser Scanner for Mobile Robot Navigation

  • Kim, Joo-Wan; Shim, Duk-Sun
    • Journal of Advanced Navigation Technology / v.18 no.2 / pp.134-141 / 2014
  • Mobile robots use various sensors for navigation, such as wheel encoders, vision sensors, sonar, and laser sensors. Dead reckoning with a wheel encoder accumulates positioning errors, so a wheel encoder cannot be used alone. Vision sensors provide so much information that the number of features and the complexity of the perception scheme increase. Sonar is not suitable for positioning because of its poor accuracy. Laser sensors, on the other hand, provide relatively accurate distance information. In this paper, we propose to extract angular information from the distance measurements of a laser range finder and to use a Kalman filter that matches the heading and distance of the laser range finder with those of the wheel encoder (a sketch follows). With a single feature point, the error can grow significantly when the feature point varies or the tracker jumps to a new feature point. To solve this problem, we propose to use two feature points and show that the positioning error can be much reduced.
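
A minimal sketch of the fusion idea under strong simplifications: the bearing of the line through the two feature points, observed in the robot frame, yields an absolute heading measurement, which a scalar Kalman filter fuses with the encoder-propagated heading. The one-dimensional state, noise values, and geometry are illustrative.

```python
import numpy as np

def heading_from_two_features(p1, p2, line_bearing_map):
    """p1, p2: feature points in the robot frame; line_bearing_map: the known
    bearing of the p1->p2 line in the map. Their difference is the robot heading."""
    observed = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    return (line_bearing_map - observed + np.pi) % (2 * np.pi) - np.pi

def kf_heading_step(theta, P, d_theta_enc, z_lrf, Q=1e-4, R=4e-4):
    """Predict with the encoder heading increment, correct with the LRF heading."""
    theta, P = theta + d_theta_enc, P + Q        # prediction
    K = P / (P + R)                              # Kalman gain
    innov = (z_lrf - theta + np.pi) % (2 * np.pi) - np.pi
    return theta + K * innov, (1 - K) * P

# Hypothetical usage: two wall corners seen by the LRF while the robot turns.
p1, p2 = np.array([2.0, -1.0]), np.array([2.0, 1.5])   # in the robot frame
z = heading_from_two_features(p1, p2, line_bearing_map=np.pi / 2 + 0.2)
theta, P = kf_heading_step(theta=0.15, P=0.01, d_theta_enc=0.03, z_lrf=z)
print(round(z, 3), round(theta, 3))              # measurement ~0.2, fused ~0.2
```

Using two points makes the bearing measurement insensitive to jumps of a single tracked point, which is the abstract's motivation.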

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho; Kee, Chang-Don; Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.274-282 / 2011
  • This paper focuses on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-wing UAVs with multiple rotors. They can be utilized in various military situations, such as surveillance and reconnaissance, and for obtaining visual information from steep terrain or disaster sites. In this paper, a quad-rotor model is introduced together with its control system, which is designed based on a proportional-integral-derivative controller, and a vision-based collision avoidance control system. For a UAV to navigate safely in areas such as buildings and offices with many obstacles, a collision avoidance algorithm must be installed in the UAV's hardware, covering obstacle detection, avoidance maneuvering, and so on. The optical flow method, one of the vision-based collision avoidance techniques, is introduced (sketched below), and the multi-rotor UAV's collision avoidance is demonstrated by simulations in various virtual environments.
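
A hedged sketch of the optical-flow balance strategy commonly used for this kind of avoidance: compute dense Farneback flow between consecutive frames, compare the mean flow magnitude in the left and right image halves, and yaw away from the side with larger flow (nearer obstacles). The parameters and the control law are illustrative, not necessarily the paper's.

```python
import numpy as np
import cv2

def avoidance_yaw(prev_gray, gray, k_yaw=0.5):
    """Balance strategy: larger average flow on one side indicates closer
    obstacles there, so command a yaw toward the quieter side."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    return k_yaw * (left - right) / (left + right + 1e-9)  # > 0: yaw right

# Hypothetical usage: a textured frame whose left half shifts more between frames.
rng = np.random.default_rng(1)
prev = cv2.GaussianBlur((rng.random((120, 160)) * 255).astype(np.uint8), (9, 9), 0)
cur = prev.copy()
cur[:, :80] = np.roll(prev[:, :80], 4, axis=1)   # large motion on the left
cur[:, 80:] = np.roll(prev[:, 80:], 1, axis=1)   # small motion on the right
print(avoidance_yaw(prev, cur))                  # positive: steer right
```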