• Title/Abstract/Keyword: Vision Navigation System

해파리 퇴치용 자율 수상 로봇의 설계 및 구현 (Design and Implementation of Unmanned Surface Vehicle JEROS for Jellyfish Removal)

  • 김동훈;신재욱;김형진;김한근;이동화;이승목;명현
    • 로봇학회논문지
    • /
    • Vol. 8, No. 1
    • /
    • pp.51-57
    • /
    • 2013
  • Recently, the number of jellyfish has grown rapidly because of global warming, the increase in marine structures, pollution, and other factors. The growing jellyfish population threatens the marine ecosystem and causes substantial damage to fisheries, seaside power plants, and beach tourism. To address this problem, researchers have developed a manual jellyfish-dissecting device and a pump system for jellyfish removal. However, these systems require many human operators and their cost-effectiveness is poor. Thus, this paper presents the design, implementation, and experiments of an autonomous jellyfish removal robot system named JEROS. JEROS consists of an unmanned surface vehicle (USV), a jellyfish removal device, an electrical control system, an autonomous navigation system, and a vision-based jellyfish detection system. The USV was designed as a twin-hull ship, and the jellyfish removal device consists of a net for gathering jellyfish and a blade-equipped propeller for dissecting them. The autonomous navigation system begins by generating an efficient path for jellyfish removal once the location of jellyfish is received from a remote server or recognized by the vision system. The location of JEROS is estimated by an IMU (Inertial Measurement Unit) and GPS, and jellyfish are eliminated while the robot tracks the path. The performance of vision-based jellyfish recognition, navigation, and jellyfish removal was demonstrated through field tests in Masan and Jindong harbors on the southern coast of Korea.
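
To make the navigation step concrete, the following minimal Python sketch generates a lawn-mower style coverage path over a reported jellyfish patch and computes the heading toward the next waypoint from the fused GPS/IMU position estimate. The path pattern, patch geometry, and interfaces are assumptions for illustration; the abstract does not specify JEROS's actual planner.

```python
import math

def coverage_path(x0, y0, width, height, spacing):
    """Generate a simple boustrophedon (lawn-mower) coverage path over a
    rectangular jellyfish patch.  The corner (x0, y0), patch size, and sweep
    spacing are hypothetical inputs; the paper does not state the actual
    path-planning scheme."""
    waypoints, y, flip = [], y0, False
    while y <= y0 + height:
        row = [(x0, y), (x0 + width, y)]
        waypoints.extend(reversed(row) if flip else row)
        y += spacing
        flip = not flip
    return waypoints

def heading_to_waypoint(pos, wp):
    """Desired heading (rad) from the fused GPS/IMU position estimate toward
    the next waypoint, as a basic path-tracking loop would use it."""
    return math.atan2(wp[1] - pos[1], wp[0] - pos[0])

if __name__ == "__main__":
    path = coverage_path(0.0, 0.0, 100.0, 40.0, spacing=10.0)
    print(len(path), "waypoints; first heading:",
          heading_to_waypoint((0.0, -5.0), path[0]))
```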

영상 기반 항법을 위한 가우시안 혼합 모델 기반 파티클 필터 (Particle Filters using Gaussian Mixture Models for Vision-Based Navigation)

  • 홍경우;김성중;방효충;김진원;서일원;박장호
    • 한국항공우주학회지
    • /
    • Vol. 47, No. 4
    • /
    • pp.274-282
    • /
    • 2019
  • Vision-based navigation for unmanned aerial vehicles is an important technology that can compensate for the vulnerabilities of the widely used GPS/INS integrated navigation system, and it is being actively studied. However, conventional image-matching techniques have the drawback that they cannot adequately account for actual flight conditions. This paper therefore proposes a particle filter based on Gaussian mixture models for vision-based navigation. The proposed particle filter models both the camera image and the reference database as Gaussian mixture models and estimates the vehicle's position from the similarity between the two. The position estimation performance is verified through Monte Carlo simulations.
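
The core idea, as the abstract describes it, is to weight particles by the similarity between a GMM from the camera image and a GMM built from the reference database. The sketch below is a minimal particle-filter cycle under that assumption, using GMM log-likelihood as a stand-in for the paper's (unspecified) similarity measure; the motion noise, the `map_gmm` interface, and the resampling scheme are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_loglik(points, weights, means, covs):
    """Log-likelihood of observed feature points under a Gaussian mixture."""
    dens = sum(w * multivariate_normal.pdf(points, m, c)
               for w, m, c in zip(weights, means, covs))
    return np.sum(np.log(dens + 1e-12))

def particle_filter_step(particles, weights, motion, observed_pts, map_gmm):
    """One predict/update/resample cycle.  map_gmm(pos) is assumed to return
    the (weights, means, covs) of the database GMM expressed in the camera
    frame of a particle at 'pos'; the paper's exact similarity measure is not
    given in the abstract, so GMM log-likelihood is used here."""
    rng = np.random.default_rng()
    # Predict: apply the nominal motion plus illustrative process noise.
    particles = particles + motion + rng.normal(0.0, 1.0, particles.shape)
    # Update: weight each particle by how well the image GMM matches the map GMM.
    loglik = np.array([gmm_loglik(observed_pts, *map_gmm(p)) for p in particles])
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # Resample to avoid degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```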

다중카메라와 레이저스캐너를 이용한 확장칼만필터 기반의 노면인식방법 (Road Recognition based Extended Kalman Filter with Multi-Camera and LRF)

  • 변재민;조용석;김성훈
    • 로봇학회논문지
    • /
    • Vol. 6, No. 2
    • /
    • pp.182-188
    • /
    • 2011
  • This paper describes a method of road tracking using vision and a laser scanner that extracts the road boundary (road lane and curb) for navigation of an intelligent transport robot in structured road environments. Road boundary information plays a major role in developing such an intelligent robot. For global navigation, we use a global positioning system together with a global planner; local navigation is accomplished by recognizing the road lane and curb and estimating their location relative to the robot with an EKF (Extended Kalman Filter) algorithm, assuming prior information about the road is available. The complete system has been tested on an electric vehicle equipped with cameras, laser scanners, and GPS. Experimental results demonstrate the effectiveness of the combined laser and vision approach for detecting the road curb and lane boundaries.
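
A minimal EKF sketch of the kind of boundary tracking described above: the state is a hypothetical lateral offset and relative heading of the lane or curb with respect to the robot, propagated with the robot's odometry and corrected with camera/laser measurements. The models and state choice are assumptions, not the paper's formulation.

```python
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the state [y, psi] (lateral offset to the boundary, relative
    heading) with the robot's forward speed v and yaw rate omega."""
    y, psi = x
    x_pred = np.array([y + v * dt * np.sin(psi), psi - omega * dt])
    F = np.array([[1.0, v * dt * np.cos(psi)],   # Jacobian of the motion model
                  [0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Fuse a direct measurement of [y, psi] extracted from the camera and
    laser scanner (an illustrative measurement model)."""
    H = np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```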

광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템 (3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor)

  • 조영진;오현민;김민영
    • 제어로봇시스템학회논문지
    • /
    • Vol. 22, No. 8
    • /
    • pp.579-584
    • /
    • 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system, called an optical tracker, is generally used. However, an optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage of optical tracking systems, in this paper an internal vision sensor is attached to the surgical instrument. By monitoring a target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments was carried out, followed by an integration experiment. The experimental results show that the rotational error is at most 1.32° (mean 0.35°) and the translational error is at most 1.72 mm (mean 0.58 mm). It is confirmed that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.
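
The occlusion workaround can be summarized as a chain of rigid transforms: the internal camera measures the marker, the camera is calibrated to the instrument, and the marker is registered in the tracker frame. Below is a minimal sketch, with frame names chosen for this illustration rather than taken from the paper.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def instrument_pose_when_occluded(T_world_marker, T_cam_marker, T_instr_cam):
    """Recover the instrument pose in the tracker/world frame by chaining
    transforms when the optical tracker's line of sight is blocked:
        T_world_instr = T_world_marker @ inv(T_cam_marker) @ inv(T_instr_cam)
    T_world_marker : marker pose registered in the tracker frame
    T_cam_marker   : marker pose measured by the internal vision sensor
    T_instr_cam    : hand-eye calibration of the camera on the instrument
    These frame names are assumptions made for this sketch."""
    return T_world_marker @ np.linalg.inv(T_cam_marker) @ np.linalg.inv(T_instr_cam)
```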

이동로봇의 자율주행을 위한 전방향 비젼 시스템의 구현에 관한 연구 (A Study on the Construction of an Omnidirectional Vision System for the Mobile Robot's Autonomous Navigation)

  • 고민수;한영환;이응혁;홍승홍
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2001년도 하계종합학술대회 논문집(5)
    • /
    • pp.17-20
    • /
    • 2001
  • This study concerns the autonomous navigation of a mobile robot that operates with a single sensor, an omnidirectional vision system, which makes it possible to capture in real time the movements of objects and walls approaching the robot from all directions and to shorten the processing time. The field of view is extended with a reflective mirror system so that points in all directions (2π) around the robot can be observed at a distance; the robot then recognizes the three-dimensional world through simple image processing, a transformation procedure, and constant monitoring of the angle and distance to surrounding obstacles. The study consists of three parts: the design of the omnidirectional vision system, the image processing, and the evaluation of the implemented system through comparative experiments and three-dimensional measurements.
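
A minimal sketch of the unwarping step such a system needs: the mirror image is remapped to a panoramic strip so that obstacle bearing over the full 2π can be read from the horizontal axis. The calibration inputs (mirror center and radius) and the use of OpenCV's `warpPolar` are assumptions; the paper's exact transform procedure is not given in the abstract.

```python
import cv2

def unwrap_omnidirectional(img, center, r_max, radial_px=180, angular_px=720):
    """Unwrap a mirror-based omnidirectional image into a panoramic strip.
    'center' and 'r_max' (mirror center and outer radius in pixels) are
    assumed to come from a prior calibration step."""
    # warpPolar samples radius along x and angle along y ...
    polar = cv2.warpPolar(img, (radial_px, angular_px), center, r_max,
                          cv2.WARP_POLAR_LINEAR)
    # ... so rotate to obtain a conventional panorama whose horizontal axis
    # spans the full 2*pi of viewing directions around the robot.
    return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)

# Usage: pano = unwrap_omnidirectional(cv2.imread("omni.png"), (320, 240), 230)
```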

A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh;Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 6, No. 3
    • /
    • pp.835-854
    • /
    • 2012
  • In this paper, we propose a novel positioning system for indoor navigation that helps a user navigate easily to desired destinations in an unfamiliar indoor environment using a mobile phone. The system requires only the user's mobile phone with its basic built-in sensors, such as a camera and a compass. The system tracks the user's position and orientation using a vision-based approach that utilizes 360° panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile phone positioning methods, reducing the position estimation error to around 0.7 meters. In addition, to enable the proposed system to work independently on a mobile phone without additional hardware or external infrastructure, we employ a modified version of a fast and robust feature matching scheme using Histogrammed Intensity Patches. The experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
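
A rough sketch of the hybrid scheme: the compass heading selects the slice of a stored 360° panorama the phone should currently see, and feature matching against that slice scores each candidate location. ORB features stand in for the Histogrammed Intensity Patch descriptor, which is not available off the shelf, and the database layout is hypothetical.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(query_img, ref_img):
    """Number of cross-checked ORB matches between query frame and reference."""
    _, d1 = orb.detectAndCompute(query_img, None)
    _, d2 = orb.detectAndCompute(ref_img, None)
    if d1 is None or d2 is None:
        return 0
    return len(matcher.match(d1, d2))

def panorama_window(pano, compass_deg, fov_deg=60):
    """Horizontal slice of the 360-degree panorama corresponding to the compass
    heading; fov_deg is an assumed camera field of view."""
    h, w = pano.shape[:2]
    cx = int((compass_deg % 360.0) / 360.0 * w)
    half = max(1, int(fov_deg / 360.0 * w / 2))
    cols = [(cx + dx) % w for dx in range(-half, half)]
    return pano[:, cols]

def localize(query_img, compass_deg, panoramas):
    """panoramas: list of (position, image) records, a hypothetical database
    layout.  Returns the position whose compass-aligned window matches best."""
    if not panoramas:
        return None
    scores = [match_score(query_img, panorama_window(img, compass_deg))
              for _, img in panoramas]
    return panoramas[int(np.argmax(scores))][0]
```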

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2007년도 Proceedings of ISRS 2007
    • /
    • pp.508-511
    • /
    • 2007
  • Car navigation is one of the most important applications in telematics. A recent trend in car navigation systems is the use of real video captured by a camera mounted on the vehicle, because video can bridge the semantic gap between the map and the real world. In this paper, we present a visual car navigation system that visually represents navigation information for route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane change guidance. We propose a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented reality display functions. The core part of the system is the visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented, according to a determination rule based on the current location and driving circumstances. We briefly describe the implementation of the system.
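
As an illustration of what such a determination rule might look like, here is a toy version; the thresholds and representation categories are invented, since the abstract only states that the rule depends on the current location and driving circumstances.

```python
def choose_representation(distance_to_turn_m, lane_change_needed, speed_kmh):
    """Toy determination rule for how to overlay guidance on the live video.
    The branches and categories are hypothetical, for illustration only."""
    if lane_change_needed:
        return "lane-change arrow"
    if distance_to_turn_m < 50 or speed_kmh < 20:
        return "3-D turn arrow anchored to the road"
    return "2-D turn icon with distance countdown"
```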

비전센서와 INS 기반의 항법 시스템 구현 시 랜드마크 사용에 따른 가관측성 분석 (Observability Analysis of a Vision-INS Integrated Navigation System Using Landmark)

  • 원대희;천세범;성상경;조진수;이영재
    • 한국항공우주학회지
    • /
    • Vol. 38, No. 3
    • /
    • pp.236-242
    • /
    • 2010
  • A navigation system that integrates GNSS and an INS has the drawback that it cannot provide navigation information when no satellites are available. To overcome this, a navigation system combined with a vision sensor is used as an alternative, but because navigation is generally performed using only feature points, the system suffers from insufficient observability. If landmarks whose positions are known in advance are additionally used, observability can be improved compared with using feature points alone. In this paper, the degree of observability improvement when additional landmarks are used is analyzed through TOM/SOM analysis and eigenvalue analysis. Simulation results show that observability is always insufficient when only feature points are used, whereas with landmarks the system becomes fully observable after the second measurement update. Therefore, using landmarks improves observability and thus the performance of the overall system.
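
The analysis can be reproduced in miniature with a numerical observability check: stack H, HF, HF², ... and inspect the rank and eigenvalues of the resulting Gram matrix. The sketch below uses a time-invariant linear system as a simplified stand-in for the paper's TOM/SOM construction over piecewise-constant segments, so it only illustrates the general procedure.

```python
import numpy as np

def observability_matrix(F, H, steps=None):
    """Stack H, HF, HF^2, ... for a linear system x_{k+1} = F x_k, z_k = H x_k.
    A simplified stand-in for the TOM/SOM construction used in the paper."""
    n = F.shape[0]
    steps = n if steps is None else steps
    blocks, HFk = [], H.copy()
    for _ in range(steps):
        blocks.append(HFk)
        HFk = HFk @ F
    return np.vstack(blocks)

def observability_report(F, H):
    """Rank and eigenvalue spread of O^T O; near-zero eigenvalues indicate the
    weakly observable directions that extra landmarks are meant to remove."""
    O = observability_matrix(F, H)
    eigvals = np.sort(np.linalg.eigvalsh(O.T @ O))
    return np.linalg.matrix_rank(O), eigvals
```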

전파간섭환경에서 UAV를 활용한 선박의 백업항법시스템 설계 (Design for Back-up of Ship's Navigation System using UAV in Radio Frequency Interference Environment)

  • 박슬기;손표웅
    • 한국항행학회논문지
    • /
    • Vol. 23, No. 4
    • /
    • pp.289-295
    • /
    • 2019
  • The International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA) requires that a backup navigation system used in the maritime domain guarantee a horizontal accuracy of 10 m during port entry and departure. eLoran, a representative maritime backup navigation system, has been shown to satisfy a horizontal accuracy of better than 10 m, but its navigation performance can degrade depending on the reception environment. In particular, noise and multipath around the receiving antenna can make navigation impossible in certain situations. In this paper, to meet the horizontal accuracy requirement for port entry and departure under such conditions, a backup navigation system for ships using a UAV (unmanned aerial vehicle) is designed. To reduce the influence of the surrounding environment on eLoran signal reception, the UAV carries a camera, an IMU sensor, and an eLoran antenna and receiver; it tracks a landmark with the camera from a position higher than the ship's antenna and is designed to receive the eLoran signal while controlling its position and attitude within a certain range. The ship can then satisfy the horizontal accuracy requirement for port entry and departure by using the imagery and attitude information received from the UAV together with the ship-based positioning result obtained from the eLoran signal.
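
One way to picture the UAV's role is a simple image-based station-keeping loop: nudge the UAV horizontally so the tracked landmark stays centred in the camera image. The sketch below is only that intuition in code; the gain, velocity limit, and control axes are assumptions, not the paper's controller.

```python
import numpy as np

def station_keeping_command(landmark_px, image_size, gain=0.002, v_max=1.0):
    """Command a horizontal velocity proportional to the landmark's pixel
    offset from the image centre.  The proportional gain, velocity limit, and
    camera geometry are assumptions for this sketch; the paper only states
    that position and attitude are controlled so the UAV stays within a fixed
    range of the landmark."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = landmark_px[0] - cx, landmark_px[1] - cy
    # Saturate the command so a momentary tracking glitch cannot fly the UAV away.
    return np.clip(np.array([gain * ex, gain * ey]), -v_max, v_max)

# Usage: cmd = station_keeping_command((700, 360), (1280, 720))
```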

컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘 (Autonomous-Flight Drone Algorithm Using Computer Vision and GPS)

  • 김정환;김식
    • 대한임베디드공학회논문지
    • /
    • Vol. 11, No. 3
    • /
    • pp.193-200
    • /
    • 2016
  • This paper introduces an algorithm for an autonomous navigation flight system for low- to mid-priced drones using computer vision and GPS. Existing drone operation methods mainly consist of either loading the flight path into the drone's software before the flight or following signals transmitted from a controller. This paper introduces a new algorithm that allows the autonomous navigation flight system to locate a specific place, a specific shape of place, or a specific space within an area the user wishes to search. Technology developed for the defense industry was implemented on a lower-cost hobby drone without changing its hardware, and this paper's algorithm was used to maximize its performance. When the user provides an image of the place to be found, the camera mounted on the drone processes the live image and searches for the specific area of interest. With this algorithm, the autonomous navigation flight system for low- to mid-priced drones is expected to be applicable to a variety of industries.
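
A minimal sketch of the search idea: fly a set of GPS waypoints and, at each one, look for the user-supplied target image in the camera frame. Template matching is used as a simple stand-in since the abstract does not name the vision method, and the waypoint/camera interfaces are hypothetical.

```python
import cv2

def find_target(frame, target, threshold=0.8):
    """Search one camera frame for the user-supplied target image with
    normalised cross-correlation template matching (a stand-in method)."""
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_t = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray_f, gray_t, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

def search_mission(waypoints, grab_frame, target):
    """Fly GPS waypoints (hypothetical (lat, lon, alt) tuples) and report the
    first waypoint at which the target appears in the camera image.
    grab_frame(wp) stands in for the flight controller / camera interface."""
    for wp in waypoints:
        frame = grab_frame(wp)          # navigate to wp by GPS, capture image
        if find_target(frame, target) is not None:
            return wp                   # target located at this waypoint
    return None
```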