• Title/Summary/Keyword: Vision-based positioning


Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo; Lee, Yong; Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, a vision-based positioning assistance algorithm was developed to estimate the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision based positioning algorithm was developed by combining the vision-based positioning algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyse the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared to the existing GNSS/on-board sensor based positioning algorithm.
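
A rough illustrative sketch of the distance-from-stereo step described above (not the authors' implementation): depth can be recovered from block-matching disparity with the pinhole relation Z = f·B/d, where the focal length and baseline below are hypothetical calibration values.

```python
import numpy as np
import cv2

# Hypothetical calibrated stereo rig (values are assumptions, not from the paper)
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # distance between the two cameras in meters

def object_distance(left_gray, right_gray, roi):
    """Estimate the distance to an object inside `roi` = (x, y, w, h)
    using block-matching disparity and Z = f * B / d."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # must be divisible by 16
                                    blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    x, y, w, h = roi
    patch = disparity[y:y + h, x:x + w]
    valid = patch[patch > 0]              # ignore pixels with no stereo match
    if valid.size == 0:
        return None
    d = np.median(valid)                  # robust disparity for the object
    return FOCAL_PX * BASELINE_M / d      # depth in meters
```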

High Accuracy Vision-Based Positioning Method at an Intersection

  • Manh, Cuong Nguyen; Lee, Jaesung
    • Journal of information and communication convergence engineering / v.16 no.2 / pp.114-124 / 2018
  • This paper illustrates a vision-based vehicle positioning method at an intersection to support the C-ITS. It removes minor shadows that cause the merging problem by simply eliminating the fractional parts of a quotient image. In order to separate occluded vehicles, it first performs a distance transform to analyze the contents of a single merged foreground object and find seeds, each of which represents one vehicle, and then applies the watershed algorithm to find the natural border between two cars. In addition, a general vehicle model and the corresponding space estimation method are proposed. For the performance evaluation, the corresponding ground-truth data are read and compared with the vision-based detection results. Two criteria, IOU and DEER, are defined to measure the accuracy of the extracted data. The evaluation shows that the average IOU is 0.65 with a hit ratio of 97%, and that the average DEER is 0.0467, which corresponds to a positioning error of 32.7 centimeters.
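
The occlusion-separation step described above (distance transform to find per-vehicle seeds, then watershed to split the merged blob) follows a standard marker-based watershed recipe. The sketch below is one common OpenCV formulation, not the paper's exact method; the seed threshold ratio is an assumption.

```python
import numpy as np
import cv2

def split_merged_vehicles(fg_mask, bgr_frame):
    """Separate touching vehicles in a binary foreground mask using
    a distance transform to find seeds and marker-based watershed."""
    # Distance to the background: peaks sit near each vehicle's center.
    dist = cv2.distanceTransform(fg_mask, cv2.DIST_L2, 5)

    # Seeds: pixels far enough from the blob border (0.5 is an assumed ratio).
    _, seeds = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    seeds = seeds.astype(np.uint8)

    # Label each seed region as a separate marker.
    _, markers = cv2.connectedComponents(seeds)
    markers = markers + 1                      # reserve 0 for "unknown" pixels
    unknown = cv2.subtract(fg_mask, seeds)     # foreground pixels with no seed
    markers[unknown == 255] = 0

    # Watershed floods from the markers and writes -1 on the borders.
    markers = cv2.watershed(bgr_frame, markers)
    return markers   # per-pixel vehicle labels; -1 marks the natural border
```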

Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML (비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이)

  • Hyun, Kwon-Bang; To, Chong-Kil
    • Proceedings of the KIEE Conference / 2005.05a / pp.258-260 / 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. The self-positioning with the vision system is based on a well-known localization algorithm. After self-positioning, the 2D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning, shows the result of overlaying the 2D scene with the VRML scene, and describes the advantage expected from overlapping both scenes.
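
The abstract does not spell out the localization algorithm itself. As a textbook-style sketch only, if two recognized landmarks with known map coordinates are visible and the robot's heading is known (e.g. from odometry), the position can be solved from the two bearing rays; the function below illustrates that idea and is not taken from the paper.

```python
import numpy as np

def self_position(landmarks, bearings, heading):
    """Estimate the robot's 2-D position from two recognized landmarks.

    landmarks : two known map positions [(x1, y1), (x2, y2)]
    bearings  : bearing of each landmark measured in the robot frame [rad]
    heading   : robot heading in the map frame [rad] (assumed known)
    """
    L1, L2 = (np.asarray(l, dtype=float) for l in landmarks)
    # Unit direction from the robot toward each landmark, in map coordinates.
    u1 = np.array([np.cos(heading + bearings[0]), np.sin(heading + bearings[0])])
    u2 = np.array([np.cos(heading + bearings[1]), np.sin(heading + bearings[1])])

    # p + r1*u1 = L1 and p + r2*u2 = L2  =>  r1*u1 - r2*u2 = L1 - L2
    A = np.column_stack((u1, -u2))
    r1, r2 = np.linalg.solve(A, L1 - L2)
    return L1 - r1 * u1        # robot position in map coordinates
```

With more than two landmarks, the same equations can be stacked and solved in the least-squares sense.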


A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh; Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.3 / pp.835-854 / 2012
  • In this paper, we propose a novel positioning system for indoor navigation which helps a user navigate easily to desired destinations in an unfamiliar indoor environment using his or her mobile phone. The system requires only the user's mobile phone with its basic built-in sensors, such as a camera and a compass. The system tracks the user's position and orientation using a vision-based approach that utilizes 360° panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile-phone positioning methods by reducing the position estimation error to around 0.7 meters. In addition, to enable the proposed system to work independently on a mobile phone without additional hardware or external infrastructure, we employ a modified version of a fast and robust feature matching scheme using Histogrammed Intensity Patches. The experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
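
The paper fuses a vision-based orientation estimate with the phone's digital compass. One simple way to blend two noisy heading sources, shown below purely as an illustration (the authors' actual weighting scheme may differ, and the weight value is an assumption), is a complementary-style weighted average that handles angle wrap-around.

```python
import math

def fuse_heading(vision_heading, compass_heading, vision_weight=0.8):
    """Blend two heading estimates (radians), respecting wrap-around at +/-pi.

    vision_weight is a hypothetical trust factor; the paper's actual balance
    between panoramic-image matching and the compass may differ.
    """
    # Work with the angular difference so that 359 deg and 1 deg average to 0 deg.
    diff = math.atan2(math.sin(compass_heading - vision_heading),
                      math.cos(compass_heading - vision_heading))
    fused = vision_heading + (1.0 - vision_weight) * diff
    # Normalize back to (-pi, pi].
    return math.atan2(math.sin(fused), math.cos(fused))
```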

Path finding via VRML and VISION overlay for Autonomous Robotic (로봇의 위치보정을 통한 경로계획)

  • Sohn, Eun-Ho; Park, Jong-Ho; Kim, Young-Chul; Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.10c / pp.527-529 / 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and a vision overlay. To correct the robot's position, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image-processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.
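
Overlaying the VRML scene on the camera image requires rendering the model from the estimated robot pose; the core operation is the pinhole projection of a 3-D model point into image pixels. The sketch below illustrates that projection with an assumed intrinsic matrix and is not taken from the paper.

```python
import numpy as np

def project_point(point_3d, R, t, K):
    """Project a 3-D point from the VRML/world frame into image pixels.

    R, t : camera rotation (3x3) and translation (3,) from the estimated pose
    K    : 3x3 camera intrinsic matrix (assumed known from calibration)
    """
    p_cam = R @ np.asarray(point_3d, dtype=float) + t    # world -> camera frame
    if p_cam[2] <= 0:
        return None                                      # behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]                              # pixel coordinates (u, v)

# Hypothetical intrinsics for a 640x480 camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
```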


Loosely-Coupled Vision/INS Integrated Navigation System

  • Kim, Youngsun; Hwang, Dong-Hwan
    • Journal of Positioning, Navigation, and Timing / v.6 no.2 / pp.59-70 / 2017
  • Since GPS signals are vulnerable to interference and obstruction, many alternative aiding systems have been proposed for integration with an inertial navigation system. Among these, vision-aided methods have become more attractive due to their benefits in weight, cost, and power consumption. This paper proposes a loosely-coupled vision/INS integrated navigation method which can work in GPS-denied environments. The proposed method improves navigation accuracy by correcting INS navigation and sensor errors using the position and attitude outputs of a landmark-based vision navigation system. Furthermore, it has the advantage of providing a redundant navigation output independent of the INS output. Computer simulations and van tests were carried out to show the validity of the proposed method. The results show that the proposed method works well and gives reliable navigation outputs with better performance.
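
In a loosely-coupled integration such as the one described above, the vision system's position (and attitude) output serves as a measurement that corrects the INS error state in a Kalman filter. The sketch below shows only a generic position measurement update; the state layout and the noise variance are assumptions, not the paper's filter design.

```python
import numpy as np

def vision_measurement_update(x, P, z_pos, pos_idx=(0, 1, 2), r_pos=4.0):
    """Correct an INS error-state estimate with a vision-derived position fix.

    x       : error-state vector (position, velocity, attitude errors, biases, ...)
    P       : state covariance matrix
    z_pos   : position measurement = INS position - vision position (3,)
    pos_idx : indices of the position error states in x (assumed layout)
    r_pos   : assumed vision position noise variance in m^2
    """
    n = x.size
    H = np.zeros((3, n))
    for row, col in enumerate(pos_idx):
        H[row, col] = 1.0                      # measurement observes position errors
    R = r_pos * np.eye(3)

    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z_pos - H @ x)                # state correction
    P = (np.eye(n) - K @ H) @ P                # covariance update
    return x, P
```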

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho; Kee, Chang-Don; Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.274-282 / 2011
  • This paper is focused on dynamic modeling and control system design as well as vision based collision avoidance for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-winged UAVs with multiple rotors. These multi-rotor UAVs can be utilized in various military situations such as surveillance and reconnaissance. They can also be used for obtaining visual information from steep terrains or disaster sites. In this paper, a quad-rotor model is introduced as well as its control system, which is designed based on a proportional-integral-derivative controller and vision-based collision avoidance control system. Additionally, in order for a UAV to navigate safely in areas such as buildings and offices with a number of obstacles, there must be a collision avoidance algorithm installed in the UAV's hardware, which should include the detection of obstacles, avoidance maneuvering, etc. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and multi-rotor UAV's collision avoidance simulations are described in various virtual environments in order to demonstrate its avoidance performance.
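
The abstract outlines optical-flow-based collision avoidance. A common "balance strategy" compares the average flow magnitude in the left and right image halves and yaws away from the side with larger flow (closer obstacles). The sketch below uses dense Farnebäck flow from OpenCV and is a generic illustration rather than the paper's controller; the gain is an assumption.

```python
import numpy as np
import cv2

def avoidance_yaw_command(prev_gray, curr_gray, gain=0.5):
    """Return a yaw-rate command that turns the UAV away from the image
    half with larger optical flow (i.e. closer obstacles)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)         # per-pixel flow magnitude
    h, w = mag.shape
    left = mag[:, : w // 2].mean()
    right = mag[:, w // 2:].mean()
    # Positive command = turn right (away from a dominant left-side flow).
    return gain * (left - right) / (left + right + 1e-6)
```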

A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho; Zhang, Yuanliang; Kim, Young-Chul; Chong, Kil-To
    • Proceedings of the IEEK Conference / 2006.06a / pp.907-908 / 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image-processing and neural-network pattern-matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.
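
Path tracking itself is not detailed in the abstract. For illustration only, the sketch below uses a pure-pursuit steering law (a common choice, not necessarily the one in the paper) to drive a differential-drive robot toward a look-ahead point on the corrected path; the forward speed is an assumed constant.

```python
import math

def pure_pursuit_omega(pose, lookahead_point, v=0.2):
    """Angular-velocity command that steers a differential-drive robot
    toward a look-ahead point on the planned path.

    pose            : (x, y, theta) of the robot in map coordinates
    lookahead_point : (x, y) point on the path ahead of the robot
    v               : assumed constant forward speed [m/s]
    """
    x, y, theta = pose
    dx = lookahead_point[0] - x
    dy = lookahead_point[1] - y
    alpha = math.atan2(dy, dx) - theta         # bearing of the target in the robot frame
    L = math.hypot(dx, dy)                     # distance to the look-ahead point
    if L < 1e-6:
        return 0.0
    curvature = 2.0 * math.sin(alpha) / L      # pure-pursuit curvature
    return v * curvature                       # omega = v * kappa
```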


Lane-Level Positioning based on 3D Tracking Path of Traffic Signs (교통 표지판의 3차원 추적 경로를 이용한 자동차의 주행 차로 추정)

  • Park, Soon-Yong; Kim, Sung-ju
    • The Journal of Korea Robotics Society / v.11 no.3 / pp.172-182 / 2016
  • Lane-level vehicle positioning is an important task for enhancing the accuracy of in-vehicle navigation systems and the safety of autonomous vehicles. GPS (Global Positioning System) and DGPS (Differential GPS) are generally used in navigation service systems, but they only provide an accuracy of about 2~3 m. In this paper, we propose a 3D vision based lane-level positioning technique which can provide an accurate vehicle position. The proposed method determines the current driving lane of a vehicle by tracking the 3D positions of traffic signs standing at the side of the road. Using a stereo camera, the 3D tracking paths of the traffic signs are computed, and their projections onto the 2D road plane are used to determine the distance from the vehicle to the signs. Several experiments were performed on real roads to analyze the feasibility of the proposed method. According to the experimental results, the proposed method achieves 90.9% accuracy in lane-level positioning.
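
The key quantity in the method above is the lateral distance from the vehicle to the roadside sign, obtained by projecting the sign's stereo-tracked 3-D path onto the road plane; once that distance is known, the driving lane follows from simple arithmetic on the lane width. The sketch below shows that last step with an assumed lane width and sign-to-curb offset, as an illustration rather than the paper's estimator.

```python
def lane_index(lateral_dist_to_sign, lane_width=3.5, shoulder_offset=1.0):
    """Map the lateral distance from the vehicle to a roadside sign (meters)
    to a 1-based lane index counted from the curb side.

    lane_width and shoulder_offset (sign-to-curb distance) are assumed values.
    """
    # Distance from the curb-side lane boundary to the vehicle.
    from_curb = lateral_dist_to_sign - shoulder_offset
    if from_curb <= 0:
        return 1
    return int(from_curb // lane_width) + 1

# Example: a sign measured 6.2 m to the side of the vehicle's path falls in
# the second lane from the curb under the assumed geometry.
print(lane_index(6.2))   # -> 2
```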

A Bimodal Approach for Land Vehicle Localization

  • Kim, Seong-Baek; Choi, Kyung-Ho; Lee, Seung-Yong; Choi, Ji-Hoon; Hwang, Tae-Hyun; Jang, Byung-Tae; Lee, Jong-Hun
    • ETRI Journal / v.26 no.5 / pp.497-500 / 2004
  • In this paper, we present a novel idea for integrating a low-cost inertial measurement unit (IMU) and the Global Positioning System (GPS) for land vehicle localization. By taking advantage of positioning data calculated from images using photogrammetry and stereo-vision techniques, the proposed bimodal approach significantly reduces the localization errors caused by GPS outages. More specifically, positioning data from the photogrammetric approach are fed back into the Kalman filter to reduce and compensate for IMU errors and improve the performance. Experimental results are presented to show the robustness of the proposed method, which can be used to reduce positioning errors caused by a low-cost IMU when a GPS signal is not available in urban areas.
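
During a GPS outage the photogrammetric/stereo-vision position takes the place of GPS as the aiding measurement. The sketch below replaces the paper's Kalman filter with a fixed-gain complementary correction to keep the illustration short; the gains and the GPS-over-vision preference are assumptions.

```python
import numpy as np

def propagate_and_correct(pos, vel, dt, gps_fix=None, vision_fix=None,
                          k_gps=0.8, k_vision=0.4):
    """One step of a simplified aided dead-reckoning loop.

    A fixed-gain correction stands in for the paper's Kalman filter;
    the gains and the preference order are assumptions.
    """
    pos = np.asarray(pos, dtype=float) + np.asarray(vel, dtype=float) * dt
    if gps_fix is not None:                   # GPS available: strong correction
        pos = pos + k_gps * (np.asarray(gps_fix, dtype=float) - pos)
    elif vision_fix is not None:              # GPS outage: photogrammetric fix
        pos = pos + k_vision * (np.asarray(vision_fix, dtype=float) - pos)
    return pos                                # corrected position estimate
```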
