• Title/Summary/Keyword: Vision navigation

Search results: 314 (processing time: 0.025 s)

무인차량 적용을 위한 영상 기반의 지형 분류 기법 (Vision Based Outdoor Terrain Classification for Unmanned Ground Vehicles)

  • 성기열;곽동민;이승연;유준
    • 제어로봇시스템학회논문지 / Vol. 15 No. 4 / pp.372-378 / 2009
  • For effective mobility control of unmanned ground vehicles in outdoor off-road environments, terrain-cover classification technology using passive sensors is vital. This paper presents a novel method for terrain classification based on the color and texture information of off-road images. It uses a neural network classifier with wavelet features. We exploit the wavelet mean and energy features extracted from multi-channel wavelet-transformed images, and also utilize the spatial coordinates of terrain classes in the images as additional features. By comparing the classification performance of the applied features, the experimental results show that the proposed algorithm is promising and has potential for autonomous navigation.
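
As a rough illustration of the wavelet mean and energy features mentioned above, the following sketch computes them with a one-level Haar transform. The paper's actual wavelet basis, decomposition depth, and channel handling are not specified here, so every name and parameter below is a hypothetical stand-in:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    img = img.astype(float)
    # Pairwise averages/differences along columns, then rows
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img):
    """Mean and energy of each subband, concatenated into a feature vector."""
    feats = []
    for band in haar_subbands(img):
        feats.append(band.mean())          # wavelet mean
        feats.append(np.mean(band ** 2))   # wavelet energy
    return np.array(feats)
```

Such a feature vector (optionally extended with pixel coordinates, as the abstract suggests) would then be fed to the neural network classifier.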

단일 카메라 전방향 스테레오 영상 시스템 (Single Camera Omnidirectional Stereo Imaging System)

  • 이수영;최병욱
    • 제어로봇시스템학회논문지 / Vol. 15 No. 4 / pp.400-405 / 2009
  • A new method for catadioptric omnidirectional stereo vision with a single camera is presented in this paper. The proposed method uses a concave lens with a convex mirror. Since the optical components of the proposed method are simple and commercially available, the resulting omnidirectional stereo system is versatile and cost-effective. A closed-form solution for 3D distance computation is presented based on simple optics, including the reflection of the convex mirror and the refraction of the concave lens. The compactness of the system and the simplicity of the image processing make the omnidirectional stereo system appropriate for real-time applications such as autonomous navigation of a mobile robot or object manipulation. In order to verify the feasibility of the proposed method, an experimental prototype was implemented.

A Study on the Distance Measurement Algorithm using Feature-Based Matching for Autonomous Navigation

  • Song, Hyun-Sung;Lee, Ho-Soon;Jeong, Jun-Ik;Son, Kyung-Hee;Rho, Do-Hwan
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 ICCAS 2001 / pp.63.2-63 / 2001
  • Distance measurement is necessary to detect obstacles and vehicles ahead for autonomous navigation. In this paper, we propose an algorithm using stereo vision. The procedure of the algorithm is as follows. First, the common edges of the vehicle ahead are detected from the left and right images by image processing; we select the number plate of the vehicle ahead as the edge region. Then, we estimate the distance by the triangulation method after stereo matching, using the corner points of the plate's edges as feature points. Experimental results compare the measured values with distances between vehicles that were set up in advance.
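
The triangulation step described above can be sketched with the standard rectified-stereo depth relation Z = f·B/d, where d is the disparity of a matched feature point. The function name and parameters below are illustrative, not taken from the paper:

```python
def stereo_distance(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from a rectified stereo pair: Z = f * B / disparity.

    focal_px:    focal length in pixels
    baseline_m:  distance between the two camera centres in metres
    x_left_px / x_right_px: horizontal pixel coordinates of the same
    matched feature (e.g. a number-plate corner) in each image.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity
```

For example, with an 800 px focal length, a 0.5 m baseline, and a 20 px disparity, the matched corner lies 20 m ahead.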


A Refinement Method for Structure from Stereo Motion

  • Park, Sung-Kee;Kim, Mun-Sang;Kweon, In-So
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 ICCAS 2001 / pp.63.6-63 / 2001
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community, and its properties are becoming well understood. As a drawback, it is well known that SFM methods using a small-motion model, such as optical flow and the direct method, inevitably suffer a motion ambiguity between translation and rotation, called the bas-relief ambiguity. In this paper, based on a robust direct method using a stereo image sequence, we present a new method for reducing this ambiguity. Basically, the direct method uses nearly all image pixels for estimating motion parameters and depths, and global optimization techniques are adopted for finding its solution ...


A Framework for Cognitive Agents

  • Petitt, Joshua D.;Braunl, Thomas
    • International Journal of Control, Automation, and Systems / Vol. 1 No. 2 / pp.229-235 / 2003
  • We designed a family of completely autonomous mobile robots with local intelligence. Each robot has a number of on-board sensors, including vision, and does not rely on global positioning systems. The on-board embedded controller is sufficient to analyze several low-resolution color images per second. This enables our robots to perform several complex tasks such as navigation, map generation, and intelligent group behavior. Not being limited to playing the game of soccer, and being completely autonomous, we are also looking at a number of other interesting scenarios. The robots can communicate with each other, e.g. to exchange positions, information about objects, or the local states they are currently in (e.g. sharing their current objectives with other robots in the group). We are particularly interested in the differences between a behavior-based approach and a traditional control algorithm at this still very low level of action.

천정부착 랜드마크 위치와 에지 화소의 이동벡터 정보에 의한 이동로봇 위치 인식 (Mobile Robot Localization using Ceiling Landmark Positions and Edge Pixel Movement Vectors)

  • 진홍신;아디카리 써얌프;김성우;김형석
    • 제어로봇시스템학회논문지 / Vol. 16 No. 4 / pp.368-373 / 2010
  • A new indoor mobile robot localization method is presented. The robot recognizes well-designed single-color landmarks on the ceiling with a vision system and uses them as references to compute its precise position. The proposed likelihood-prediction-based method enables the robot to estimate its position based only on the orientation of a landmark. The use of single-color landmarks reduces the complexity of the landmark structure and makes the landmarks easily detectable. Edge-based optical flow is further used to compensate for landmark recognition errors. This technique is applicable to navigation in an indoor space of unlimited size. The prediction scheme and localization algorithm are proposed, and edge-based optical flow and data fusion are presented. Experimental results show that the proposed method provides an accurate estimate of the robot position, with a localization error within 5 cm and a directional error of less than 4 degrees.
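
As a minimal sketch of how a ceiling landmark with a known world position can fix the robot's position, assume an upward-facing pinhole camera and a flat ceiling; the paper's likelihood-prediction scheme is more involved, and all names and parameters here are assumptions:

```python
import math

def robot_position(landmark_world, pixel_offset, heading_rad,
                   ceiling_height_m, focal_px):
    """Estimate robot (x, y) from one ceiling landmark.

    The landmark's pixel offset (u, v) from the image centre maps to a
    metric offset on the ceiling plane via the pinhole model, is rotated
    into the world frame by the robot heading, and is subtracted from the
    landmark's known world position.
    """
    lx, ly = landmark_world
    u, v = pixel_offset
    # Metric offset of the landmark relative to the camera (camera frame)
    dx = u * ceiling_height_m / focal_px
    dy = v * ceiling_height_m / focal_px
    # Rotate into the world frame
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    wx = c * dx - s * dy
    wy = s * dx + c * dy
    return (lx - wx, ly - wy)
```

When the landmark sits at the image centre, the robot is directly beneath it; off-centre landmarks shift the estimate accordingly.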

차량정밀측위를 위한 복합측위 기술 동향 (Overview of sensor fusion techniques for vehicle positioning)

  • 박진원;최계원
    • 한국전자통신학회논문지 / Vol. 11 No. 2 / pp.139-144 / 2016
  • This paper reviews recent trends in sensor fusion techniques for precise vehicle positioning. GNSS alone cannot satisfy the accuracy and reliability of the precise positioning required for autonomous driving. We introduce sensor fusion techniques that combine GNSS with inertial navigation sensors such as odometers and gyroscopes. We also review recent trends in positioning methods that match landmarks detected by LiDAR or stereo vision against the information stored in high-definition maps.
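
GNSS/inertial fusion of the kind surveyed above is commonly realized with a Kalman filter. Below is a deliberately simplified 1-D predict/update sketch, not the survey's own formulation; all names and noise values are illustrative:

```python
def fuse_step(x, P, odo_delta, q, gnss_pos, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P:      current position estimate and its variance
    odo_delta: displacement from odometry, with process noise variance q
    gnss_pos:  GNSS position fix, with measurement noise variance r
    """
    # Predict: dead-reckon with odometry; uncertainty grows by q
    x_pred = x + odo_delta
    P_pred = P + q
    # Update: blend in the GNSS fix, weighted by the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (gnss_pos - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Iterating this at each time step keeps the drift of dead reckoning bounded by the absolute GNSS fixes, which is the core idea behind the combined positioning techniques the paper surveys.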

TEST OF A LOW COST VEHICLE-BORNE 360 DEGREE PANORAMA IMAGE SYSTEM

  • Kim, Moon-Gie;Sung, Jung-Gon
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 International Symposium on Remote Sensing 2008 / pp.137-140 / 2008
  • Recently, many areas such as surveillance, virtual reality, navigation, and 3D scene reconstruction require wide-field-of-view images. Conventional camera systems have a limited field of view and provide only partial information about the scene; an omnidirectional vision system can overcome these disadvantages. Acquiring 360-degree panorama images usually requires an expensive omnidirectional camera lens. In this study, a 360-degree panorama image system was tested using a low-cost optical reflector which captures 360-degree panoramic views in a single shot. This system can be used together with detailed positional information from GPS/INS. The results of this study show that the 360-degree panorama image is a very effective tool for a mobile monitoring system.


DIND Data Fusion with Covariance Intersection in Intelligent Space with Networked Sensors

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 7 No. 1 / pp.41-48 / 2007
  • Recent advances in networked sensor technology, state-of-the-art mobile robotics, and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. This study is a preliminary step toward developing a multi-purpose "Intelligent Space" (ISpace) platform for easily implementing advanced technologies that realize smart services for humans. We explain the ISpace system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers cover this topic; instead, we focus on the main results concerning DIND data fusion with covariance intersection (CI) in Intelligent Space. We conclude by discussing possible future extensions of ISpace. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions of tracking multiple objects, human detection, and motion assessment, together with results from the simulation runs.
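
The covariance intersection (CI) rule named in the title fuses two estimates whose cross-correlation is unknown via a convex combination of the inverse covariances, P⁻¹ = ω·Pa⁻¹ + (1−ω)·Pb⁻¹. A minimal sketch, assuming two Gaussian estimates (the function name and test values are illustrative, not taken from the paper):

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega):
    """Fuse two estimates with unknown cross-correlation via CI.

    omega in [0, 1] weights the two sources; in practice it is often
    chosen to minimize trace(P) or det(P) of the fused covariance.
    """
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    P_inv = omega * Pa_inv + (1.0 - omega) * Pb_inv
    P = np.linalg.inv(P_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P
```

Unlike a naive Kalman update, CI never claims more confidence than either source justifies, which is why it suits distributed sensor networks where measurements may share unmodeled correlations.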

조도를 고려한 표지판 인식 (Traffic Sign Recognition Considering the Intensity of Illumination)

  • 차연화;전창묵;권태범;강성철
    • 로봇학회논문지 / Vol. 6 No. 2 / pp.173-181 / 2011
  • Recognition of traffic signs helps an unmanned ground vehicle decide its behavior correctly, and it can reduce traffic accidents. However, low-cost traffic sign recognition using a vision sensor is very difficult because the signs are exposed to various illumination conditions. This paper proposes a new approach to solve this problem using an illuminometer, which measures the intensity of illumination. Using the intensity of illumination, the recognizer adjusts its image-processing parameters. Therefore, we can reduce the loss of information such as the shape and color of traffic signs. Experimental results show that the proposed method improves the performance of traffic sign recognition in various weather and lighting conditions.
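
The parameter-adjustment idea can be sketched as a mapping from the illuminometer reading to an image-binarization threshold; the specific base value, gain, and clamp range below are invented for illustration and are not taken from the paper:

```python
def binarization_threshold(lux, base=128, gain=0.002, lo=60, hi=200):
    """Pick a binarization threshold from an illuminometer reading.

    Brighter scenes (higher lux) get a higher threshold so that sign
    shapes and colors are not washed out; the result is clamped to
    [lo, hi] to stay in a usable range for 8-bit images.
    """
    return int(min(hi, max(lo, base + gain * lux)))
```

A recognizer could call this once per frame and feed the result to its segmentation stage, instead of using one fixed threshold for all lighting conditions.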