• Title/Summary/Keyword: Vision Navigation System


Development of an Intelligent Unmanned Vehicle Control System (지능형 무인자동차 제어시스템 개발)

  • Kim, Yoon-Gu;Lee, Ki-Dong
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.3 / pp.126-135 / 2008
  • The development of an unmanned vehicle basically requires robust and reliable performance of its major functions, which include global localization, lane detection, obstacle avoidance, and path planning. These functional subsystems can be implemented by integrating and fusing data acquired from various sensory systems such as GPS, vision, ultrasonic sensors, encoders, and an electric compass. This paper focuses on implementing the functional subsystems, which are designed and developed with the graphical programming tool NI LabVIEW, and on verifying the autonomous navigation and remote control of the unmanned vehicle.


Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
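
The weighted template matching mentioned in this abstract can be illustrated with a minimal sketch: a Pearson-style correlation coefficient in which each pixel carries a weight, slid over the image. The weight mask, array sizes, and brute-force search loop below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def weighted_correlation(patch, template, weights):
    """Weighted correlation coefficient between an image patch and a template.
    Pixels with larger weights (e.g. landmark interior) contribute more;
    weights near zero suppress unreliable background pixels."""
    w = weights / weights.sum()
    mp = (w * patch).sum()            # weighted means
    mt = (w * template).sum()
    dp, dt = patch - mp, template - mt
    cov = (w * dp * dt).sum()
    var_p = (w * dp * dp).sum()
    var_t = (w * dt * dt).sum()
    return cov / np.sqrt(var_p * var_t + 1e-12)

def match(image, template, weights):
    """Slide the template over the image; return ((row, col), score) of the best match."""
    H, W = image.shape
    h, w = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = weighted_correlation(image[r:r+h, c:c+w], template, weights)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```

With uniform weights this reduces to ordinary normalized cross-correlation; a non-uniform mask is where the "weighted" variant differs.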

Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.197-209 / 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For a system with low cost and complexity, we propose a single-camera-based system that utilizes ceiling images acquired from a camera installed to point upwards. For reliable operation, we propose a method using hybrid features, which include natural landmarks in a natural scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features, as we exploit both natural and artificial features. In addition, compared with previous works using only the natural scene, our method has an advantage in convergence speed and robustness, as an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, ours is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can easily be utilized in a variety of service robot applications in a large indoor environment.

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, the robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could make the robot move in a wrong direction and suffer damage from collisions with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural network pattern matching techniques are applied to find the robot's location. After the self-positioning procedure, the 2-D scene of the vision system is overlaid onto a VRML scene. This paper describes how to realize the self-positioning and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae;Sung, Chang-Hun;Roh, Hyun-Chul;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • In order for a service robot to perform several tasks, autonomous navigation techniques such as localization, mapping, and path planning are basically required. Localization (estimating the robot's pose) is a fundamental ability for a service robot to navigate autonomously. In this paper, we propose a new system for point pattern matching-based visual global localization using spot lightings in the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition. A single camera looking upward (called a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lightings are detected and tracked through the image sequence. To detect more spot lightings, we choose a wide-FOV lens, which inevitably introduces serious image distortion; but by applying the correction calculation only to the positions of the spot lightings, not to all image pixels, we can decrease the processing time. Then, using point pattern matching and least-squares estimation, we finally obtain the precise position and orientation of the mobile robot. Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.
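
The final least-squares step described in this abstract — recovering the robot pose from matched point pairs — admits a standard closed form in 2-D; a minimal sketch under assumed frames (points seen in the robot frame matched to points in the map frame):

```python
import numpy as np

def estimate_pose_2d(robot_pts, map_pts):
    """Least-squares rigid transform (theta, t) mapping matched points from
    the robot frame onto the map frame: minimizes sum ||R a_i + t - b_i||^2."""
    a = np.asarray(robot_pts, float)
    b = np.asarray(map_pts, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    A, B = a - ca, b - cb                      # centered point sets
    # Closed-form optimal 2-D rotation: atan2 of summed cross and dot products
    s = (A[:, 0] * B[:, 1] - A[:, 1] * B[:, 0]).sum()
    c = (A * B).sum()
    theta = np.arctan2(s, c)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = cb - R @ ca
    return theta, t
```

The rotation falls out of the centered cross- and dot-product sums, and the translation is whatever aligns the two centroids afterwards.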


Obstacle Avoidance Algorithm of a Mobile Robot using Image Information (화상 정보를 이용한 이동 로봇의 장애물 회피 알고리즘)

  • Kwon, O-Sang;Lee, Eung-Hyuk;Han, Yong-Hwan;Hong, Seung-Hong
    • Journal of IKEEE / v.2 no.1 s.2 / pp.139-149 / 1998
  • Robot navigation with a single kind of sensor suffers from several problems. To address this, we propose a system that takes advantage of both a CCD camera and ultrasonic sensors. A coordinate extraction algorithm for avoiding obstacles during navigation is also proposed. We implemented a CCD-based vision system on the front part of the vehicle and conducted experiments to verify the suggested algorithm's validity. The experimental results show that the error rate was reduced when a CCD camera was used rather than ultrasonic sensors alone. We can also generate a path that avoids the detected obstacles using the measured values.
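
One common way such a camera/ultrasonic combination yields obstacle coordinates is to take the bearing from the obstacle's pixel column (pinhole model) and the range from the ultrasonic reading. The sketch below illustrates only that idea; the image width, field of view, and frame conventions are assumptions, not values from the paper.

```python
import math

def obstacle_position(pixel_x, range_m, img_width=640, fov_deg=60.0):
    """Combine a camera bearing (from the obstacle's pixel column) with an
    ultrasonic range reading to get the obstacle (x, y) in the robot frame
    (x forward, y left). Assumes the camera and ultrasonic sensor are co-located."""
    f = (img_width / 2) / math.tan(math.radians(fov_deg / 2))  # focal length, px
    bearing = math.atan2((img_width / 2) - pixel_x, f)          # + = to the left
    return range_m * math.cos(bearing), range_m * math.sin(bearing)
```

For example, an obstacle at the image center sits straight ahead at the measured range, while one at the left image edge sits at half the field-of-view angle to the left.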


UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments (야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정)

  • Choi, Ji-Hoon;Park, Yong Woon;Joo, Sang Hyeon;Shim, Seong Dae;Min, Ji Hong
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.5 / pp.557-564 / 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. The conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective because there are many features compared to open space, and these features can provide much information for UGV localization. Thus, this paper uses scene matching and pose estimation-based vision navigation, a magnetic compass, and an odometer to cope with GPS-denied environments. An NR-mode federated filter is used for system safety. Experimental results on a predefined path demonstrate the enhanced robustness and accuracy of localization in outdoor environments.
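
In a federated architecture, each sensor pair runs its own local filter and a master filter fuses their outputs. The sketch below shows only the master-filter fusion step, by information (inverse-covariance) weighting; the NR-mode feedback structure and the actual local filters are beyond this snippet and the state layout is assumed.

```python
import numpy as np

def fuse_estimates(local_outputs):
    """Master-filter fusion of local-filter outputs (x_i, P_i) by information
    weighting: P = (sum P_i^-1)^-1 and x = P @ sum(P_i^-1 @ x_i)."""
    infos = [np.linalg.inv(P) for _, P in local_outputs]
    P = np.linalg.inv(sum(infos))
    x = P @ sum(I @ x_i for I, (x_i, _) in zip(infos, local_outputs))
    return x, P
```

The fused covariance is never larger than any local one, which is the point of combining, say, a GPS/INS local filter with a vision/odometry local filter.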

Human following of Indoor mobile service robots with a Laser Range Finder (단일레이저거리센서를 탑재한 실내용이동서비스로봇의 사람추종)

  • Yoo, Yoon-Kyu;Kim, Ho-Yeon;Chung, Woo-Jin;Park, Joo-Young
    • The Journal of Korea Robotics Society / v.6 no.1 / pp.86-96 / 2011
  • Human-following is one of the significant procedures in the human-friendly navigation of mobile robots. Many approaches to human-following have adopted multiple sensors such as vision systems and Laser Range Finders (LRFs). In this paper, we propose detection and tracking approaches for human legs using a single LRF. We extract four simple attributes of human legs and, to define the boundary of the extracted attributes mathematically, we use a Support Vector Data Description (SVDD) scheme. We establish an efficient leg-tracking scheme that exploits a human walking model to achieve robust tracking under occlusions. The proposed approaches were successfully verified through various experiments.
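
The idea of a one-class boundary around leg attribute vectors can be caricatured with a much simpler description than SVDD: a center-and-radius ball over the attribute space. This is a crude stand-in for the paper's SVDD optimization, and the four attributes and their values are hypothetical.

```python
import numpy as np

def fit_boundary(train_attrs, quantile=0.95):
    """Crude stand-in for SVDD: describe the training attribute vectors by a
    center and a radius covering `quantile` of the samples. A real SVDD solves
    a QP for the minimum enclosing (kernelized) ball; this only illustrates
    the accept/reject boundary idea."""
    X = np.asarray(train_attrs, float)
    center = X.mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    return center, np.quantile(d, quantile)

def is_leg(attrs, center, radius):
    """Accept a candidate segment if its attribute vector lies inside the ball."""
    return np.linalg.norm(np.asarray(attrs, float) - center) <= radius
```

Candidate LRF segments whose (hypothetical) width/shape attributes fall inside the learned boundary are accepted as legs; everything else is rejected as clutter.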

Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.7 no.2 / pp.162-170 / 2003
  • In this paper, we propose a robust controller for the trajectory control of n-link robot manipulators using feature-based visual feedback. In order to reduce the tracking error of the robot manipulator due to parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory for tracking is generated by feature extraction with the camera mounted on the end effector. The stability of the robust state feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method has good tracking performance.
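
Why integral action in the inner loop helps under parametric uncertainty can be sketched with a 1-link simulation: the controller uses a deliberately wrong nominal inertia and faces a constant disturbance torque. The inertias, gains, and disturbance are illustrative assumptions; the paper treats the full n-link case with a Lyapunov proof.

```python
def simulate(ki, d=-3.0, m_true=2.0, m_nom=1.5, kp=100.0, kd=20.0,
             dt=1e-3, T=5.0, q_ref=1.0):
    """1-link arm with true inertia m_true, controlled using the wrong nominal
    inertia m_nom (parametric uncertainty) plus a constant disturbance torque d.
    Returns the joint angle after T seconds of Euler integration."""
    q = dq = ei = 0.0
    for _ in range(int(T / dt)):
        e = q_ref - q
        ei += e * dt                                  # integral of tracking error
        tau = m_nom * (kp * e - kd * dq + ki * ei)    # PD + integral inner loop
        ddq = (tau + d) / m_true                      # true plant dynamics
        dq += ddq * dt
        q += dq * dt
    return q
```

With ki = 0 the disturbance leaves a steady-state error of about d/(m_nom·kp) = 0.02 rad; with integral action the same loop drives the error to zero despite the wrong inertia.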
