• Title/Summary/Keyword: Vision navigation

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo; Lee, Yong; Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.3, pp.101-108, 2019
  • Owing to recent improvements in computer processing speed and image processing technology, research on combining camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning is being actively carried out. In this study, a vision-based positioning assistance algorithm was developed that estimates the distance to objects from stereo images. In addition, a GNSS/on-board vehicle sensor/vision positioning algorithm was developed by combining the vision-based algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared with the existing GNSS/on-board sensor positioning algorithm.
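The abstract does not give the paper's formulas; below is a minimal sketch of the stereo range estimate it describes, assuming an already rectified image pair with known focal length and baseline (all parameter values are illustrative).

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d.

    disparity_px -- horizontal pixel offset of the same point in the left/right images
    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 700 px, B = 0.3 m, d = 10 px  ->  Z = 21 m
print(stereo_depth(10.0, 700.0, 0.3))
```

The estimated range, or a velocity derived from successive ranges, would then enter the GNSS/on-board filter as an additional measurement, as the abstract outlines.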

Observability Analysis of a Vision-INS Integrated Navigation System Using Landmark (비전센서와 INS 기반의 항법 시스템 구현 시 랜드마크 사용에 따른 가관측성 분석)

  • Won, Dae-Hee; Chun, Se-Bum; Sung, Sang-Kyung; Cho, Jin-Soo; Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.38 no.3, pp.236-242, 2010
  • A GNSS/INS integrated system cannot provide navigation solutions when no satellites are available. To overcome this problem, a vision sensor is integrated into the system. Since a vision-aided integrated system generally uses only feature points to compute navigation solutions, it suffers from an observability problem. In this case, additional landmarks, i.e., points whose positions are known a priori, can improve the observability. In this paper, the observability is evaluated using the TOM/SOM matrices and their eigenvalues. The feature-point-only case always has observability problems, whereas the landmark case becomes fully observable after the 2nd update time. Consequently, landmarks ensure full observability, so the system performance can be improved.
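The TOM/SOM construction is specific to the paper's INS error model and is not reproduced here. As a generic, hedged illustration of the same idea, the sketch below checks observability of a toy linear system by the rank of the stacked observability matrix, and shows how an absolute (landmark-like) measurement restores full rank where a relative (feature-like) measurement alone does not.

```python
import numpy as np

def observability_matrix(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Stack C, CA, ..., CA^(n-1) for the linear system x' = Ax, y = Cx."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0],          # toy model: position driven by velocity
              [0.0, 0.0]])
C_feature = np.array([[0.0, 1.0]])   # relative (feature-like) measurement only
C_landmark = np.array([[1.0, 0.0],   # adds an absolute (landmark-like) position fix
                       [0.0, 1.0]])

for name, C in (("feature-only", C_feature), ("with landmark", C_landmark)):
    rank = np.linalg.matrix_rank(observability_matrix(A, C))
    print(f"{name}: rank {rank} of {A.shape[0]}")
```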

Design and Implementation of Unmanned Surface Vehicle JEROS for Jellyfish Removal (해파리 퇴치용 자율 수상 로봇의 설계 및 구현)

  • Kim, Donghoon; Shin, Jae-Uk; Kim, Hyongjin; Kim, Hanguen; Lee, Donghwa; Lee, Seung-Mok; Myung, Hyun
    • The Journal of Korea Robotics Society, v.8 no.1, pp.51-57, 2013
  • Recently, the jellyfish population has grown rapidly because of global warming, the increase in marine structures, pollution, and other factors. The increased jellyfish population threatens the marine ecosystem and causes huge damage to fishery industries, seaside power plants, and beach industries. To overcome this problem, researchers have developed a manual jellyfish dissecting device and a pump system for jellyfish removal. However, these systems require too many human operators and have a poor benefit-to-cost ratio. Thus, this paper presents the design, implementation, and experimental evaluation of an autonomous jellyfish removal robot system named JEROS. JEROS consists of an unmanned surface vehicle (USV), a jellyfish removal device, an electrical control system, an autonomous navigation system, and a vision-based jellyfish detection system. The USV was designed as a twin-hull ship, and the jellyfish removal device consists of a net for gathering jellyfish and a blade-equipped propeller for dissecting them. The autonomous navigation system begins by generating an efficient path for jellyfish removal when the location of jellyfish is received from a remote server or recognized by the vision system. The location of JEROS is estimated by an IMU (Inertial Measurement Unit) and GPS, and jellyfish are eliminated while the robot tracks the path. The performance of vision-based jellyfish recognition, navigation, and jellyfish removal was demonstrated through field tests in the Masan and Jindong harbors on the southern coast of Korea.
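The paper's navigation stack is not reproduced in the abstract; as a hedged sketch of the waypoint-tracking step it describes (GPS fix in, heading command out), the helpers below compute the bearing to a jellyfish waypoint and the signed heading error a steering controller would act on. All coordinates are illustrative.

```python
import math

def bearing_to_waypoint(lat, lon, wlat, wlon):
    """Initial great-circle bearing (degrees) from the current GPS fix to a waypoint."""
    phi1, phi2 = math.radians(lat), math.radians(wlat)
    dlam = math.radians(wlon - lon)
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return math.degrees(math.atan2(y, x)) % 360.0

def heading_error(current_heading_deg, target_bearing_deg):
    """Signed error in [-180, 180), e.g. fed to a steering controller."""
    return (target_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0

# USV heading due east (90 deg); waypoint to the northeast
b = bearing_to_waypoint(35.10, 128.60, 35.11, 128.61)
print(round(b, 1), round(heading_error(90.0, b), 1))
```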

VFH-based Navigation using Monocular Vision (단일 카메라를 이용한 VFH기반의 실시간 주행 기술 개발)

  • Park, Se-Hyun; Hwang, Ji-Hye; Ju, Jin-Sun; Ko, Eun-Jeong; Ryu, Juang-Tak; Kim, Eun-Yi
    • Journal of Korea Society of Industrial Information Systems, v.16 no.2, pp.65-72, 2011
  • In this paper, a real-time monocular vision-based navigation system is developed for people with disabilities, in which online background learning and a vector field histogram (VFH) are used to identify obstacles and recognize traversable paths. The proposed system operates in three steps: obstacle classification, occupancy grid map generation, and VFH-based path recommendation. First, obstacles are discriminated in the images by subtraction against a background model that is learned in real time. Thereafter, based on the classification results, a 32×24 occupancy map is produced, each cell of which represents its risk on 10 gray levels. Finally, a polar histogram is drawn from the occupancy map, and the sectors corresponding to valleys are chosen as safe paths. To assess its effectiveness, the proposed system was tested with a variety of obstacles indoors and outdoors and showed an accuracy of 88%. Moreover, it showed superior performance compared with sensor-based navigation systems, which proves its feasibility as an assistive device for people with disabilities.
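The abstract does not give the histogram parameters, so the sketch below is a simplified VFH pass under assumed parameters: it builds a polar obstacle histogram from an occupancy grid of the stated 32×24 size and returns the low-density sectors ("valleys") as candidate safe headings.

```python
import numpy as np

def vfh_safe_sectors(grid: np.ndarray, n_sectors: int = 36, threshold: float = 2.0):
    """Simplified vector field histogram.

    grid : (H, W) array of cell risks (0 = free); the robot is assumed to sit at
    the bottom-center cell, facing up the grid. Returns the polar histogram over
    0..180 degrees and the indices of sectors whose density is below threshold.
    """
    h, w = grid.shape
    ry, rx = h - 1, (w - 1) / 2.0              # robot cell in grid coordinates
    hist = np.zeros(n_sectors)
    sector_deg = 180.0 / n_sectors
    for y in range(h):
        for x in range(w):
            risk = grid[y, x]
            if risk <= 0:
                continue
            angle = np.degrees(np.arctan2(ry - y, x - rx))   # 0 = right, 90 = ahead
            k = min(int(angle // sector_deg), n_sectors - 1)
            dist = np.hypot(x - rx, ry - y)
            hist[k] += risk / max(dist, 1.0)   # nearer obstacles weigh more
    return hist, np.flatnonzero(hist < threshold)

grid = np.zeros((24, 32))                      # the paper's 32x24 occupancy map
grid[8:12, 20:26] = 5.0                        # synthetic obstacle patch, risk 5
hist, valleys = vfh_safe_sectors(grid)
print("candidate safe sectors:", valleys)
```

The original VFH weights cells by a calibrated distance law; the simple risk/distance weighting here is only a placeholder for that step.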

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin; Oh, Hyun Min; Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems, v.22 no.8, pp.579-584, 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, the optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage of optical tracking systems, in this paper an internal vision sensor is attached to the surgical instrument. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments was carried out, followed by an integration experiment. The experimental results show a rotational error bounded by a maximum of 1.32° (mean 0.35°) and a translational error bounded by a maximum of 1.72 mm (mean 0.58 mm). It is confirmed that the proposed tool tracking method using an internal vision sensor is useful and effective in overcoming the occlusion problem of the optical tracker.
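The abstract does not spell out the transform chain, so the following is a minimal sketch under the usual hand-eye assumptions: with the patient marker's world pose registered beforehand by the optical tracker, the instrument camera's measurement of that marker is enough to recover the tool pose while the tracker's line of sight is blocked. The matrix names are illustrative.

```python
import numpy as np

def inv_se3(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid-body transform: [R t; 0 1]^-1 = [R^T  -R^T t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def tool_pose_when_occluded(T_world_marker, T_cam_marker, T_cam_tool):
    """Chain the transforms the abstract implies (all names are illustrative).

    T_world_marker -- patient marker pose, registered earlier by the optical tracker
    T_cam_marker   -- marker pose currently measured by the instrument's camera
    T_cam_tool     -- fixed camera-to-tool-tip calibration of the instrument
    """
    T_world_cam = T_world_marker @ inv_se3(T_cam_marker)
    return T_world_cam @ T_cam_tool
```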

An Optimal Position and Orientation of Stereo Camera (스테레오 카메라의 최적 위치 및 방향)

  • Choi, Hyeung-Sik; Kim, Hwan-Sung; Shin, Hee-Young; Jung, Sung-Hun
    • Journal of Advanced Navigation Technology, v.17 no.3, pp.354-360, 2013
  • A stereo vision analysis was performed for the motion and depth control of unmanned vehicles. In stereo vision, depth information in three-dimensional coordinates can be obtained by triangulation after matching points between the stereo images. However, triangulation errors always occur for several reasons. Such errors can be alleviated by a careful arrangement of the camera position and orientation. In this paper, an approach to determining the optimal position and orientation of the camera is presented for unmanned vehicles.
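The optimization itself is in the paper, not the abstract. The sketch below only illustrates the underlying sensitivity that camera placement controls, using the standard first-order stereo error model dZ ≈ Z²/(f·B)·dd with illustrative numbers: widening the baseline B directly shrinks the range error the paper sets out to minimize.

```python
def depth_error(Z_m, focal_px, baseline_m, disparity_err_px=0.5):
    """First-order stereo triangulation error: dZ ~ Z^2 / (f * B) * dd."""
    return (Z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

for B in (0.1, 0.3, 0.6):   # baselines in meters, f = 700 px, target at 10 m
    print(f"B = {B:.1f} m -> range error at 10 m: {depth_error(10.0, 700.0, B):.2f} m")
```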

A Study on the Construction of Omnidirectional Vision System for the Mobile Robot's the Autonomous Navigation (이동로봇의 자율주행을 위한 전방향 비젼 시스템의 구현에 관한 연구)

  • 고민수; 한영환; 이응혁; 홍승홍
    • Proceedings of the IEEK Conference, 2001.06e, pp.17-20, 2001
  • This study concerns the autonomous navigation of a mobile robot that operates using an omnidirectional vision system, a sensor that makes it possible to retrieve in real time the movements of objects and walls approaching the robot from all directions while shortening the processing time. The field of view is extended with a reflective mirror system so that the robot observes a full 2π of directions at once; the robot then recognizes the three-dimensional world through simple image processing, a transform procedure, and constant monitoring of the angle and distance to peripheral obstacles. This study consists of three parts: Part 1 covers the design of the omnidirectional vision system, Part 2 the image processing, and Part 3 the evaluation of the implemented system through comparative experiments and three-dimensional measurements.
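As a hedged sketch of the "transform procedure" such mirror-based systems typically use, the function below unwraps the donut-shaped mirror image into a panoramic strip by polar-to-Cartesian resampling. The center and rim radii would come from calibration; nearest-neighbor sampling keeps the example short.

```python
import numpy as np

def unwarp_omni(img: np.ndarray, cx: float, cy: float,
                r_min: int, r_max: int, out_w: int = 720) -> np.ndarray:
    """Unwrap an omnidirectional mirror image into a panoramic strip.

    Each output column is one azimuth in [0, 2*pi); each row is one radius
    between the mirror's inner (r_min) and outer (r_max) rims.
    """
    out_h = r_max - r_min
    pano = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    for j, th in enumerate(thetas):
        for i in range(out_h):
            r = r_min + i
            x = int(round(cx + r * np.cos(th)))
            y = int(round(cy + r * np.sin(th)))
            if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
                pano[out_h - 1 - i, j] = img[y, x]   # far radii end up at the top
    return pano
```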

A Study on Detection of Object Position and Displacement for Obstacle Recognition of UCT (무인 컨테이너 운반차량의 장애물 인식을 위한 물체의 위치 및 변위 검출에 관한 연구)

  • 이진우; 이영진; 조현철; 손주한; 이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 1999.10a, pp.321-332, 1999
  • Detecting object movement is important for obstacle recognition and path searching by UCTs (unmanned container transporters) equipped with a vision sensor. This paper presents a method to extract objects and trace the trajectory of a moving object using a CCD camera, and describes a method to recognize the shape of objects with a neural network. Pixel points can be transformed into object positions in real space using the proposed viewport. The proposed technique uses a single vision system based on a floor map.
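The paper's "viewport" mapping is not specified in the abstract. A common way to realize such a pixel-to-floor transform is a plane-to-plane homography fitted from four known correspondences, sketched below with illustrative coordinates.

```python
import numpy as np

# Four image pixels and the matching floor-map points (meters), e.g. from marks
# at known positions on the quay; all names and values here are illustrative.
img_pts = np.array([[100, 400], [540, 400], [480, 220], [160, 220]], dtype=float)
map_pts = np.array([[0, 0], [4, 0], [4, 8], [0, 8]], dtype=float)

def fit_homography(src, dst):
    """Solve the 8 DLT equations for a plane-to-plane homography (h9 fixed to 1)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
    h = np.linalg.solve(np.array(rows), dst.reshape(-1))
    return np.append(h, 1.0).reshape(3, 3)

H = fit_homography(img_pts, map_pts)
p = H @ np.array([320.0, 300.0, 1.0])     # any pixel lying on the floor plane
print("floor position (m):", p[:2] / p[2])
```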

The Design of Controller for Unlimited Track Mobile Robot

  • Park, Han-Soo; Heon Jeong; Park, Sei-Seung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2001.10a, pp.41.6-41, 2001
  • As autonomous mobile robots become more widely used in industry, the importance of navigation systems is rising. The primary method of locomotion is with wheels, which causes many problems in controlling tracked mobile robots. In this paper, we discuss the navigation control of tracked mobile robots with multiple sensors. The multiple sensors consist of ultrasonic sensors and vision sensors. The vision sensors gauge distance using a laser and create visual images to estimate the robot's position. The 80196 microcontroller is used at close range and the vision board at long range. Data are managed in the main PC, and management is distributed to every sensor. The controller employs fuzzy logic.
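The abstract only names fuzzy logic without giving the rule base. As a toy illustration of the idea, the sketch below applies two hedged rules (obstacle NEAR implies turn hard, obstacle FAR implies go straight) with a weighted-average defuzzification; memberships, rule outputs, and thresholds are all assumptions.

```python
def fuzzy_steering(dist_m: float) -> float:
    """Minimal two-rule fuzzy sketch.

    IF obstacle NEAR THEN turn hard (30 deg/s); IF obstacle FAR THEN go
    straight (0 deg/s). Output is the membership-weighted average.
    """
    near = max(0.0, min(1.0, (2.0 - dist_m) / 2.0))   # 1 at 0 m, 0 beyond 2 m
    far = 1.0 - near
    return (near * 30.0 + far * 0.0) / (near + far)

for d in (0.5, 1.5, 3.0):   # obstacle distances in meters
    print(f"{d} m -> {fuzzy_steering(d):.1f} deg/s")
```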

Forest Fire Detection System using Drone Streaming Images (드론 스트리밍 영상 이미지 분석을 통한 실시간 산불 탐지 시스템)

  • Yoosin Kim
    • Journal of Advanced Navigation Technology, v.27 no.5, pp.685-689, 2023
  • The system proposed in this study aims to detect forest fires in real-time stream data received from a drone camera. Recently, the number of wildfires has been increasing, and large-scale wildfires are becoming more frequent. To prevent forest fire damage, many experiments using drone cameras and vision analysis have been conducted; however, there are many challenges, such as network speed, pre-processing, and model performance, in detecting forest fires from the real-time streaming data of a flying drone. Therefore, this study applied image data processing to capture five good image frames for vision analysis from the whole stream and then developed an object detection model based on YOLOv2. As a result, the classification model reached up to 93% accuracy on forest fire images, and a field test for model verification detected forest fires with about 70% accuracy.
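The paper's frame-selection criterion is not described in the abstract. One plausible, hedged realization is to keep the sharpest frames by variance of the Laplacian, sketched below with OpenCV; the stream URL is hypothetical, and the selected frames would then be passed to the YOLO-based detector.

```python
import cv2

def sample_sharp_frames(stream_url: str, n_frames: int = 5, stride: int = 10):
    """Grab every `stride`-th frame from the stream and keep the n sharpest,
    scoring sharpness by the variance of the Laplacian (a common blur metric)."""
    cap = cv2.VideoCapture(stream_url)
    scored = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scored.append((cv2.Laplacian(gray, cv2.CV_64F).var(), frame))
        idx += 1
    cap.release()
    scored.sort(key=lambda s: s[0], reverse=True)
    return [f for _, f in scored[:n_frames]]

# frames = sample_sharp_frames("rtmp://drone.example/stream")  # hypothetical URL
# each selected frame would then be fed to the YOLO-based fire detector
```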