• Title/Abstract/Keywords: Vision Navigation System

Search results: 194 items (processing time: 0.024 s)

스테레오 기반의 장애물 회피 알고리듬 (Obstacle Avoidance Algorithm using Stereo)

  • 김세선;김현수;하종은
    • 제어로봇시스템학회논문지 / Vol. 15, No. 1 / pp.89-93 / 2009
  • This paper deals with obstacle avoidance for an unmanned vehicle using a stereo system. The DARPA Grand Challenge 2005 showed that robots can move autonomously along given waypoints. RADAR, an IMS (Inertial Measurement System), GPS, and cameras are used for autonomous navigation. In this paper, we focus on a stereo system for autonomous navigation. Our approach is based on that of Singh et al. [5], which has been used successfully on an unmanned vehicle and a planetary robot. We propose an improved obstacle avoidance algorithm obtained by modifying the cost function of Singh et al. [5]. The proposed algorithm gives sharper contrast when choosing a local path for obstacle avoidance, as verified in experimental results.
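The abstract does not give the modified cost function itself; as a rough illustration of how a sharpness exponent can widen the gap between candidate local paths, here is a hypothetical scoring sketch (the function names, weighting, and `sharpness` parameter are all assumptions, not the paper's formulation):

```python
import math

def path_cost(candidate_heading, obstacles, sharpness=2.0):
    """Score a candidate heading by proximity to stereo-detected obstacles.

    Raising the term to a power > 1 sharpens the contrast between blocked
    and clear headings -- a hypothetical stand-in for the modified cost.
    """
    cost = 0.0
    for ox, oy in obstacles:
        bearing = math.atan2(oy, ox)           # obstacle bearing from robot
        distance = math.hypot(ox, oy)          # obstacle range
        angular_gap = abs(candidate_heading - bearing)
        # Nearer obstacles and smaller angular gaps cost more.
        cost += 1.0 / (distance * (angular_gap + 0.1)) ** sharpness
    return cost

def best_heading(obstacles, candidates):
    """Pick the candidate heading with the lowest obstacle cost."""
    return min(candidates, key=lambda h: path_cost(h, obstacles))
```

For an obstacle to the front-right, the candidate heading farthest from the obstacle bearing wins.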

GPS 취약 환경에서 전술급 무인항공기의 주/야간 영상정보를 기반으로 한 실시간 비행체 위치 보정 시스템 개발 (Development of Real-Time Vision Aided Navigation Using EO/IR Image Information of Tactical Unmanned Aerial System in GPS Denied Environment)

  • 최승기;조신제;강승모;이길태;이원근;정길순
    • 한국항공우주학회지 / Vol. 48, No. 6 / pp.401-410 / 2020
  • This study describes a vision-based real-time aircraft position correction system developed to compensate for the vulnerability of a tactical UAV's position/navigation information under GPS signal interference and jamming/spoofing attacks. When GPS is lost, a tactical UAV can continue automatic flight by switching its navigation equipment from GPS/INS integrated navigation to DR/AHRS mode; for positioning, however, dead reckoning (DR) based on airspeed and heading accumulates error over time, making it difficult to determine the aircraft's position and to auto-track the data-link antenna. To minimize this accumulated position error, we developed a system that computes the aircraft position from the aircraft attitude, the imaging sensor's azimuth/elevation, and digital terrain elevation data (DTED), using position correction points in specific areas detected by the imaging sensor, and feeds the result to the navigation equipment in real time. The function and performance of the vision-based real-time position correction system were verified through ground tests with a GPS simulator and flight tests in dead-reckoning mode.
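The position fix described above reduces to simple trigonometry once the correction point, sensor azimuth, and look-down angle are known; terrain height at the correction point would come from DTED. This flat-earth sketch without attitude compensation is only illustrative of the idea, not the fielded system:

```python
import math

def fix_position(landmark_en, landmark_alt, aircraft_alt, az_rad, depression_rad):
    """Back-compute the aircraft East/North position from one known
    correction point seen by the imaging sensor.

    az_rad is the azimuth from aircraft to landmark; depression_rad is
    the look-down angle below the horizon. landmark_alt stands in for
    the DTED terrain height at the correction point.
    """
    height = aircraft_alt - landmark_alt
    ground_range = height / math.tan(depression_rad)
    east = landmark_en[0] - ground_range * math.sin(az_rad)
    north = landmark_en[1] - ground_range * math.cos(az_rad)
    return east, north
```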

LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행 (A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot)

  • 김현우;황요섭;김윤기;이동혁;이장명
    • 제어로봇시스템학회논문지 / Vol. 19, No. 11 / pp.1029-1035 / 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for autonomous navigation of a mobile robot. Many technologies, such as airbags, ABS, and EPS, improve vehicle safety, and real-time lane detection is a fundamental requirement for an automobile system that uses information from outside the vehicle. Representative lane-recognition methods are vision-based and LRF-based. A vision-based system recognizes the three-dimensional environment well only under good image-capture conditions; unexpected factors such as poor illumination, occlusion, and vibration keep vision alone from satisfying this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For three-dimensional lane detection, the difference in laser reflection between asphalt and lane markings, which depends on color and distance, is used to extract feature points. A stable tracking algorithm is also introduced empirically. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
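The reflectance-based feature extraction can be illustrated as a simple intensity threshold over one scan; the threshold value, tuple layout, and function name below are assumptions for illustration, not the paper's algorithm:

```python
import math

def extract_lane_points(scan, intensity_threshold=120):
    """Keep only high-reflectance returns (painted lane vs. darker
    asphalt) and convert them to Cartesian feature points.

    `scan` is a list of (angle_rad, range_m, intensity) tuples.
    """
    points = []
    for angle, rng, intensity in scan:
        if intensity >= intensity_threshold:
            points.append((rng * math.cos(angle), rng * math.sin(angle)))
    return points
```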

원 궤적 경로 기법을 이용한 이동로봇의 주행 (Mobile Robot Navigation Using Circular Path Planning Algorithm)

  • 한성민;이강웅
    • 제어로봇시스템학회논문지 / Vol. 15, No. 1 / pp.105-110 / 2009
  • In this paper, we propose a navigation algorithm for mobile robot obstacle avoidance using a circular path planning method. The proposed method generates circular paths to avoid obstacles in front of the mobile robot. An optimal path for approaching the target is selected, and the linear and angular speeds are controlled for stable motion of the mobile robot. Obstacles are detected by image processing that reduces the image data obtained from a web camera. The performance of the proposed algorithm is shown by experiments on a Pioneer-2DX mobile robot.
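A minimal sketch of the candidate-arc idea: generate circular paths of different radii, discard those passing too close to an obstacle, and keep the one ending nearest the target. The clearance test and selection rule here are illustrative assumptions, not the paper's exact criterion:

```python
import math

def arc_endpoint(radius, arc_angle):
    """Endpoint of a circular arc from the origin heading along +x,
    with signed radius (positive = turn left)."""
    return (radius * math.sin(arc_angle), radius * (1.0 - math.cos(arc_angle)))

def choose_arc(radii, arc_angle, target, obstacles, clearance=0.5):
    """Drop candidate arcs passing within `clearance` of an obstacle,
    then pick the arc whose endpoint lies nearest the target."""
    def min_clearance(r):
        if not obstacles:
            return math.inf
        samples = [arc_endpoint(r, arc_angle * k / 10.0) for k in range(1, 11)]
        return min(math.dist(p, o) for p in samples for o in obstacles)
    feasible = [r for r in radii if min_clearance(r) > clearance]
    return min(feasible, key=lambda r: math.dist(arc_endpoint(r, arc_angle), target))
```

With no obstacle the left arc ending at the target wins; an obstacle on that arc pushes the choice to the opposite turn.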

GPS를 활용한 Vision/IMU/OBD 시각동기화 기법 (A Time Synchronization Scheme for Vision/IMU/OBD by GPS)

  • 임준후;최광호;유원재;김라우;이유담;이형근
    • 한국항행학회논문지 / Vol. 21, No. 3 / pp.251-257 / 2017
  • Research on integrated positioning that combines GPS (global positioning system) with vision and inertial sensors for accurate vehicle positioning is being actively pursued. This paper proposes a time synchronization scheme between the sensors, one of the key elements of integrated positioning. The proposed scheme acquires vision, inertial, and OBD (on-board diagnostics) measurements time-synchronized to GPS time. Time and position information are obtained from GPS, vehicle attitude measurements from the inertial sensor, and vehicle speed via the OBD. We propose converting the GPS time and position, the inertial measurements, and the OBD measurements into colors and inserting them into the pixels of images acquired from the vision sensor. The time-synchronized sensor measurements embedded in an image can then be recovered through the inverse conversion. An embedded Linux board was used to integrate the sensors, and real driving experiments were performed to evaluate the performance of the proposed scheme.
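The color-embedding idea can be sketched as packing measurement bytes into RGB pixel values and reading them back out. This is a toy version; the paper's actual encoding, pixel layout, and payload format are not specified in the abstract:

```python
def embed_measurements(pixels, payload):
    """Overwrite the leading pixels of a frame with measurement bytes,
    three bytes per (R, G, B) pixel; returns the pixel count consumed.

    `pixels` is a mutable list of (R, G, B) tuples; `payload` holds the
    time-tagged sensor bytes to synchronize with the frame.
    """
    padded = payload + b"\x00" * (-len(payload) % 3)
    for i in range(0, len(padded), 3):
        pixels[i // 3] = tuple(padded[i:i + 3])
    return len(padded) // 3

def extract_measurements(pixels, n_bytes):
    """Recover the first n_bytes embedded in the frame (inverse step)."""
    flat = bytes(channel for pixel in pixels for channel in pixel)
    return flat[:n_bytes]
```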

DGPS와 기계시각을 이용한 자율주행 콤바인의 개발 (Development of Autonomous Combine Using DGPS and Machine Vision)

  • 조성인;박영식;최창현;황헌;김명락
    • Journal of Biosystems Engineering / Vol. 26, No. 1 / pp.29-38 / 2001
  • A navigation system was developed for autonomous guidance of a combine. It consisted of a DGPS, a machine vision system, a gyro sensor, and an ultrasonic sensor. For autonomous operation of the combine, target points were determined first. Second, the heading angle and offset were calculated by comparing current positions obtained from the DGPS with the target points. Third, a fuzzy controller decided the steering angle by fuzzy inference on three inputs: heading angle, offset, and distance to the bank around the rice field. Finally, the hydraulic system was actuated to steer the combine. If the DGPS misbehaved, the machine vision system found the desired travel path. In this way, the combine traveled straight paths to a target point and then turned toward the next target point. The gyro sensor was used to check the turning angle. The autonomous combine traveled within 31.11 cm deviation (RMS) on straight paths and harvested up to 96% of the rice field. The field experiments proved the feasibility of autonomous harvesting. Improving DGPS accuracy by compensating for variations in the combine's attitude due to the unevenness of the rice field should be studied further.
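A toy two-input version of the fuzzy steering step might look like the following; the membership functions, rule base, and fused antecedent are invented for illustration, and the paper's third input (distance to the bank) is omitted:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_steer(heading_err, offset):
    """Tiny fuzzy inference sketch: fuse the two deviations into one
    crisp antecedent, fire LEFT/ZERO/RIGHT memberships, and defuzzify
    by a weighted average of rule outputs (steer against the deviation)."""
    deviation = 0.6 * heading_err + 0.4 * offset
    left = tri(deviation, -2.0, -1.0, 0.0)
    zero = tri(deviation, -1.0, 0.0, 1.0)
    right = tri(deviation, 0.0, 1.0, 2.0)
    total = left + zero + right
    if total == 0.0:
        return 0.0
    return (left * 0.5 + zero * 0.0 + right * -0.5) / total
```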


Simultaneous Localization and Mobile Robot Navigation using a Sensor Network

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 2 / pp.161-166 / 2006
  • Localization of a mobile agent within a sensor network is a fundamental requirement for many applications using networked navigation systems such as sonar or vision. To fully utilize the strengths of both sonar and visual sensing, this paper describes a networked-sensor-based navigation method for an autonomous mobile robot that can navigate and avoid obstacles in an indoor environment. In this method, self-localization of the robot is done with a model-based vision system using networked sensors, and nonstop navigation is realized by a Kalman-filter-based STSF (Space and Time Sensor Fusion) method. Stationary and moving obstacles are avoided using networked sensor data such as a CCD camera and a sonar ring. Since localization involves inevitable uncertainty in the features and in the robot position estimate, a Kalman filter scheme is used to estimate the mobile robot's location. We report on experiments in a hallway using a Pioneer-DX robot; extensive experiments with the robot and a sensor network confirm the validity of the approach.
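The Kalman-filter estimation underlying the STSF method can be sketched in one dimension. This is a textbook scalar measurement update, not the paper's full space-time fusion:

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse one networked-sensor
    observation z (variance R) into the prior position estimate x
    (variance P)."""
    K = P / (P + R)            # Kalman gain
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1.0 - K) * P      # reduced uncertainty
    return x_new, P_new
```

With equal prior and measurement variances, the update lands halfway between prediction and observation and halves the uncertainty.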

비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어 (Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation)

  • 진태석;이장명
    • 제어로봇시스템학회논문지 / Vol. 11, No. 4 / pp.304-313 / 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and direction for intelligent performance in an unknown environment, and mobile robots may navigate by means of monitoring systems such as sonar or vision. In conventional fusion schemes, the measurement depends on the current data sets only, so more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
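The Unscented Transformation at the heart of the UTSF scheme can be shown in its simplest one-dimensional form: generate sigma points, push them through the nonlinearity, and reweight. The paper's joint/disjoint fusion machinery is omitted:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """1-D unscented transform: propagate sigma points through a
    nonlinear function f and recover the transformed mean/variance."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear function the transform is exact, which makes a convenient sanity check.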

Constrained High Accuracy Stereo Reconstruction Method for Surgical Instruments Positioning

  • Wang, Chenhao;Shen, Yi;Zhang, Wenbin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 6, No. 10 / pp.2679-2691 / 2012
  • In this paper, a high-accuracy stereo reconstruction method for surgical instrument positioning is proposed. Usually, reconstruction of surgical instruments is treated as a basic computer vision task: estimating the 3-D position of each marker on an instrument from three pairs of image points. However, existing methods reconstruct the points separately and thus ignore structure information, while errors from lighting variation, imaging noise, and quantization still limit reconstruction accuracy. This paper proposes a method that takes the structure of surgical instruments as constraints and reconstructs all markers on one instrument together. First, we calibrate the instruments before navigation to obtain the structure parameters: the number of markers, the distances between markers, and a linearity flag for each instrument. Then the structure constraints are added to the stereo reconstruction. Finally, a weighted filter is used to reduce jitter. Experiments conducted on a surgical navigation system showed that our method not only improves accuracy effectively but also greatly reduces the jitter of the surgical instrument.
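A minimal illustration of a structure constraint: pull two noisily triangulated markers back onto their calibrated inter-marker distance by moving them symmetrically along their connecting line. This is a simple stand-in for the paper's constrained reconstruction, not its actual solver:

```python
import math

def enforce_distance(p, q, d):
    """Move two triangulated 3-D markers symmetrically along their
    connecting line so their separation equals the calibrated
    inter-marker distance d."""
    cur = math.dist(p, q)
    if cur == 0.0:
        return p, q  # degenerate case: direction undefined
    correction = (cur - d) / (2.0 * cur)
    delta = [(qi - pi) * correction for pi, qi in zip(p, q)]
    p_new = tuple(pi + di for pi, di in zip(p, delta))
    q_new = tuple(qi - di for qi, di in zip(q, delta))
    return p_new, q_new
```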

교통 표지판의 3차원 추적 경로를 이용한 자동차의 주행 차로 추정 (Lane-Level Positioning based on 3D Tracking Path of Traffic Signs)

  • 박순용;김성주
    • 로봇학회논문지 / Vol. 11, No. 3 / pp.172-182 / 2016
  • Lane-level vehicle positioning is an important task for enhancing the accuracy of in-vehicle navigation systems and the safety of autonomous vehicles. GPS (Global Positioning System) and DGPS (Differential GPS) are generally used in navigation service systems, but they only provide accuracy up to 2~3 m. In this paper, we propose a 3-D vision-based lane-level positioning technique that provides an accurate vehicle position. The proposed method determines the current driving lane of a vehicle by tracking the 3-D positions of traffic signs standing at the side of the road. Using a stereo camera, the 3-D tracking paths of traffic signs are computed, and their projections onto the 2-D road plane are used to determine the distance from the vehicle to the signs. Several experiments on real roads were performed to analyze the feasibility of the proposed method. According to the experimental results, the proposed method achieves 90.9% accuracy in lane-level positioning.
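The final lane-assignment step can be sketched as mapping the recovered lateral distance to a lane index; the lane width and shoulder offset below are illustrative values, not the paper's calibration:

```python
def current_lane(lateral_dist_to_sign, lane_width=3.5, shoulder=1.0):
    """Map the lateral distance (m) from a roadside sign to a lane
    index, counting lanes of width lane_width inward past the shoulder."""
    in_road = lateral_dist_to_sign - shoulder
    if in_road <= 0.0:
        return 1  # within the shoulder margin: nearest lane
    return int(in_road // lane_width) + 1
```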