• Title/Summary/Keyword: Image Navigation


Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features, namely line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated based on these points. The least-squares method is used to estimate the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the real world. Experimental results demonstrate the feasibility of the proposed algorithm.
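
For orientation only, the Group I estimation above can be pictured as a small nonlinear least-squares problem. The sketch below is purely illustrative: the pinhole projection model, camera height, and control-point layout are assumptions, not the paper's actual equations.

```python
# Illustrative sketch of estimating Group I parameters (orientation, scale
# factor, focal length) by nonlinear least squares; model and values are
# assumptions, not the paper's formulation.
import numpy as np
from scipy.optimize import least_squares

CAM_HEIGHT = 1.5  # assumed camera height above the road (m)

def project(world_pts, pitch, yaw, f, s):
    """Project ground-plane points (X, Y, 0) with a simple pinhole camera
    rotated by Rx(pitch) @ Rz(yaw)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([[cy, -sy, 0.0],
                  [cp * sy, cp * cy, -sp],
                  [sp * sy, sp * cy, cp]])
    cam = (R @ world_pts.T).T + np.array([0.0, 0.0, CAM_HEIGHT])
    u = s * f * cam[:, 0] / cam[:, 2]   # horizontal axis uses the scale factor s
    v = f * cam[:, 1] / cam[:, 2]
    return np.column_stack([u, v])

def residuals(params, world_pts, image_pts):
    pitch, yaw, f, s = params
    return (project(world_pts, pitch, yaw, f, s) - image_pts).ravel()

# six control points on two lane lines; world coordinates assumed known from
# the lane width (3.5 m here) and line geometry
world_pts = np.array([[0.0, 5, 0], [3.5, 5, 0], [0.0, 10, 0],
                      [3.5, 10, 0], [0.0, 15, 0], [3.5, 15, 0]])
image_pts = project(world_pts, 0.10, 0.02, 800.0, 1.05)  # synthetic measurements

fit = least_squares(residuals, x0=[0.0, 0.0, 700.0, 1.0],
                    args=(world_pts, image_pts))
print("estimated pitch, yaw, focal length, scale:", fit.x)
```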

Development of a Hover-capable AUV System for In-water Visual Inspection via Image Mosaicking (영상 모자이킹을 통한 수중 검사를 위한 호버링 타입 AUV 시스템 개발)

  • Hong, Seonghun;Park, Jeonghong;Kim, Taeyun;Yoon, Sukmin;Kim, Jinwhan
    • Journal of Ocean Engineering and Technology / v.30 no.3 / pp.194-200 / 2016
  • Recently, UUVs (unmanned underwater vehicles) have increasingly been applied in various science and engineering fields. In-water inspection, which used to be performed by human divers, is a potential application for UUVs. In particular, the operational safety and performance of in-water inspection missions can be greatly improved by using an underwater robotic vehicle. The capabilities of hovering maneuvers and automatic image mosaicking are essential for autonomous underwater visual inspection. This paper presents the development of a hover-capable autonomous underwater vehicle system for autonomous in-water inspection, which includes both a hardware platform and operational software algorithms. Results from an experiment in a model basin are presented to demonstrate the feasibility of the developed system and algorithms.
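
As a rough illustration of how pairwise image mosaicking is commonly implemented (the abstract does not detail the paper's own pipeline), the sketch below matches ORB features between consecutive grayscale frames and warps one onto the other with a RANSAC-estimated homography.

```python
# Illustrative pairwise mosaicking with OpenCV; frame sizes, feature counts,
# and the simple overwrite blending are assumptions, not the paper's method.
import cv2
import numpy as np

def mosaic_pair(img_prev, img_next):
    """img_prev, img_next: grayscale uint8 frames with overlapping content."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_prev, None)
    k2, d2 = orb.detectAndCompute(img_next, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # next -> prev frame
    h, w = img_prev.shape[:2]
    canvas = cv2.warpPerspective(img_next, H, (w * 2, h))  # room for the new strip
    canvas[:h, :w] = img_prev                              # simple overwrite blending
    return canvas
```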

Automated Geometric Correction based on Robust Estimation with Geostationary Weather Satellite Image (강인추정 기법에 기반한 정지궤도 기상위성영상의 자동 기하보정)

  • Lee, Tae-Yoon;Ahn, Myoung-Hwan;Oh, Hyun-Jong
    • Proceedings of the KSRS Conference / 2007.03a / pp.161-166 / 2007
  • Ground preprocessing for geostationary weather satellites such as the Multi-functional Transport Satellite 1R (MTSAT-1R) includes image navigation and registration, the process of correcting geometric distortion in satellite images. Landmark-based image navigation and registration consists of three steps: landmark determination, sensor model estimation, and resampling. Although image navigation and registration has already been applied to the High Resolution Image Data (HiRID) of MTSAT-1R, some images still contain geometric errors. In this study, geometric correction based on robust estimation was performed to remove these errors. Lee et al. (2005) proposed a mismatch rejection method based on robust estimation and the Direct Linear Transformation (DLT). When MTSAT-1R images were geometrically corrected with a DLT estimated using this method, some images were corrected with improved accuracy, while others still contained relatively large errors. To resolve this, the present study applied a method combining robust estimation with an affine transformation. Using 1,407 landmarks extracted from a reference coastline and eight MTSAT-1R images, automated geometric correction was performed with both the DLT-based and the affine-based robust estimation methods, and the results were compared. The results of applying RANSAC and MSAC as robust estimation techniques were also compared. The results show that the proposed method performed better than the robust estimation method using DLT.

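As a rough illustration of the affine-based robust estimation step described above, the sketch below fits a six-parameter affine model to landmark correspondences with RANSAC; the synthetic points, outlier fraction, and threshold are assumptions, not the study's configuration.

```python
# Illustrative robust affine fit to (image landmark, reference coastline)
# correspondences; all numbers are made up for the example.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img_pts = (rng.random((100, 2)) * 1000).astype(np.float32)
true_A = np.array([[1.0, 0.01], [-0.01, 1.0]], np.float32)
ref_pts = (img_pts @ true_A + np.float32([5.0, -3.0])).astype(np.float32)
ref_pts[::10] += 50.0   # simulate a few gross landmark mismatches (outliers)

# RANSAC rejects the mismatches while estimating the 2x3 affine model
A, inliers = cv2.estimateAffine2D(img_pts, ref_pts,
                                  method=cv2.RANSAC, ransacReprojThreshold=2.0)
print("affine matrix:\n", A)
print("inlier ratio:", float(inliers.mean()))

# the corrected image would then be produced by resampling with this model:
# corrected = cv2.warpAffine(image, A, (width, height))
```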

Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook;Yeo, In-Joo;Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking is a basis of autonomous driving and navigation for an unmanned vehicle, and for path tracking it is very important to find the exact position of the vehicle. GPS is used to obtain the position of the vehicle, and a direction sensor and a velocity sensor are used to compensate for the position error of GPS. To detect path lines in a road image, the bird's-eye view transform is employed, which makes it simpler to design a lateral control algorithm than working from the perspective view of the image. Because the driving speed of the vehicle should be decreased at curved lanes and crossroads, we propose a speed control algorithm that uses GPS and image data. The control algorithm was simulated and tested on the basis of an expert driver's knowledge. In the experiments, the results show that the bird's-eye view transform is well suited to steering control and that the speed control algorithm is stable in real driving.
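
The bird's-eye view step mentioned above can be illustrated with a standard inverse perspective mapping; in the sketch below, the four source points are assumptions that would in practice come from the camera mounting geometry or a calibration step.

```python
# Illustrative bird's-eye (inverse perspective) transform with OpenCV; the
# source-point fractions are assumptions, not the paper's calibration.
import cv2
import numpy as np

def birds_eye(frame):
    h, w = frame.shape[:2]
    src = np.float32([[w * 0.42, h * 0.65], [w * 0.58, h * 0.65],   # far lane edges
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])  # near lane edges
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```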

Position Improvement of a Human-Following Mobile Robot Using Image Information of Walking Human (보행자의 영상정보를 이용한 인간추종 이동로봇의 위치 개선)

  • Jin Tae-Seok;Lee Dong-Heui;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.398-405 / 2005
  • The intelligent robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, robots need to recognize their position and posture in known as well as unknown environments, and it is desirable that this localization occur naturally. Estimating the robot's position under uncertainty is one of the most important problems in mobile robot navigation. In this paper, we describe a method for the localization of a mobile robot using image information of a moving object. The method combines the position observed from dead-reckoning sensors and the position estimated from images captured by a fixed camera to localize the mobile robot. Using the a priori known path of a moving object in world coordinates and a perspective camera model, we derive the geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. A control method is also proposed to estimate the position and direction between the walking human and the mobile robot, and a Kalman filter scheme is used for estimating the mobile robot's localization. Its performance is verified by computer simulation and experiments.
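
The fusion of dead-reckoning and camera-based position estimates can be pictured as a bare-bones Kalman filter update; the state model and noise values below are assumptions, not the paper's filter design.

```python
# Illustrative Kalman filter step fusing odometry (prediction) with a
# camera-derived position measurement (correction); matrices are assumptions.
import numpy as np

Q = np.eye(2) * 0.05   # assumed dead-reckoning (process) noise
R = np.eye(2) * 0.20   # assumed camera measurement noise

def kalman_step(x, P, odom_delta, camera_meas):
    x_pred = x + odom_delta                   # predict pose from dead reckoning
    P_pred = P + Q
    K = P_pred @ np.linalg.inv(P_pred + R)    # Kalman gain
    x_new = x_pred + K @ (camera_meas - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, odom_delta=np.array([0.10, 0.0]),
                   camera_meas=np.array([0.12, 0.01]))
print("fused position:", x, "covariance diag:", np.diag(P))
```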

Functional Requirements to Develop the Marine Navigation Supporting System for Northern Sea Route (북극해 안전운항 지원시스템 구축을 위한 기능적 요구조건 도출)

  • Hong, Sung Chul;Kim, Sun Hwa;Yang, Chan Su
    • Spatial Information Research / v.22 no.5 / pp.19-26 / 2014
  • International attention to the Northern Sea Route has increased as decreasing sea-ice extent in the Arctic raises the possibility of developing new sea routes and natural resources. However, to protect ships' safety and the pristine environment in polar waters, the International Maritime Organization (IMO) has been developing the Polar Code to regulate polar shipping. A marine navigation supporting system is essential for ships traveling long distances in the Northern Sea, as they are affected by ocean weather and sea ice. Therefore, to cope with the IMO Polar Code, this research proposes the functional requirements for developing a marine navigation supporting system for the Northern Sea Route. The functional requirements derived from the IMO Polar Code consist of an Arctic voyage risk map, Arctic voyage planning, and MSI (Marine Safety Information) methods, based on which the navigation supporting system can provide dynamic, safe, and economical sea route services using sea-ice observation and prediction technologies. A requirement for system application is also derived so that the marine navigation supporting system can be used for authorizing ships operating in the Northern Sea. To reflect the proposed system in the Polar Code, continual international exchange and policy proposals are necessary along with the development of sea-ice observation and prediction technologies.

Robust Real-Time Lane Detection in Luminance Variation Using Morphological Processing (형태학적 처리를 이용한 밝기 변화에 강인한 실시간 차선 검출)

  • Kim, Kwan-Young;Kim, Mi-Rim;Kim, In-Kyu;Hwang, Seung-Jun;Beak, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.16 no.6 / pp.1101-1108 / 2012
  • In this paper, we propose an algorithm for real-time lane detection that is robust to luminance variation, using morphological image processing and edge-based region segmentation. To apply the most appropriate threshold value, an adaptive threshold is computed for every frame, and a perspective transform is applied to correct image distortion. After that, we designate an ROI so that only the lane is detected and establish a criterion to limit the ROI region. We compare accuracy and speed with and without the morphological method. Experimental results show that, with the morphological method, the proposed algorithm achieves a detection rate of 98.8% and a processing speed of 36.72 ms per frame.
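
As an illustration of the pre-processing chain (per-frame adaptive threshold followed by morphological filtering), the sketch below uses OpenCV; the block size, offset, and kernel shape are assumptions, not the paper's tuned values.

```python
# Illustrative adaptive-threshold + morphological-opening lane mask; all
# parameter values are assumptions.
import cv2

def lane_mask(gray_birdseye):
    # threshold relative to the local mean so the result tracks luminance changes
    binary = cv2.adaptiveThreshold(gray_birdseye, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, -10)
    # opening with a tall, thin kernel keeps lane-like structures and drops speckle
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 9))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```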

Measurement Algorithm of Vehicle Speed Using Real-Time Image Processing (영상의 실시간 처리에 의한 차량 속도의 계측 알고리즘)

  • Seo, Jeong-Goo;Lee, Jeong-Goo;Yun, Tae-Won;Hwang, Byong-Won
    • Journal of Advanced Navigation Technology / v.9 no.1 / pp.10-18 / 2005
  • This study develops a system, together with its algorithm, that can measure traffic flow and vehicle speed on highways and ordinary roads using an industrial television (ITV) system. The algorithm relies on real-time processing of dynamic images, and its validity is verified with a frame grabber, which processes the information of only a small number of sample points instead of every pixel of the image. In this algorithm, an approximate vehicle contour is formed by allocating sample points across the image, and the top of the vehicle contour is recognized. Applying this technique, the number of passing vehicles is measured for a single lane as well as multiple lanes. The speed of each vehicle is measured by computing the time difference between a pair of sample-point lines.

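The speed computation itself is simple arithmetic: with two sample-point lines a known distance apart, speed follows from the frame-count difference between the two crossings. The numbers in the sketch below are illustrative assumptions.

```python
# Illustrative speed-from-two-sample-lines calculation; frame rate and line
# spacing are assumed values.
FRAME_RATE = 30.0     # camera frames per second
LINE_GAP_M = 10.0     # real-world distance between the two sample-point lines (m)

def speed_kmh(frame_at_line1, frame_at_line2):
    dt = (frame_at_line2 - frame_at_line1) / FRAME_RATE   # seconds between crossings
    return (LINE_GAP_M / dt) * 3.6                        # m/s -> km/h

print(speed_kmh(120, 138))   # 18 frames apart at 30 fps over 10 m -> 60.0 km/h
```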

Unmanned Ground Vehicle Control and Modeling for Lane Tracking and Obstacle Avoidance (충돌회피 및 차선추적을 위한 무인자동차의 제어 및 모델링)

  • Yu, Hwan-Shin;Kim, Sang-Gyum
    • Journal of Advanced Navigation Technology / v.11 no.4 / pp.359-370 / 2007
  • Lane tracking and obstacle avoidance are considered two of the key technologies of an unmanned ground vehicle system. In this paper, we propose a method for lane tracking and obstacle avoidance, expressed in terms of vehicle control, modeling, and sensor experiments. First, obstacle avoidance consists of two parts: a longitudinal control system for acceleration and deceleration and a lateral control system for steering. Each system is used for unmanned ground vehicle control, which notes the vehicle's location, recognizes surrounding obstacles, and decides how fast to proceed according to the circumstances. During operation, the control strategy of the vehicle can detect obstacles and perform obstacle avoidance on the road, taking the vehicle velocity into account. Second, we explain a method of lane tracking by means of a vision system, which also consists of two parts: vehicle control is incorporated into the road model through lateral and longitudinal control, and the image processing part covers the lane tracking method, the image processing algorithm, and the filtering method. Finally, we propose a method for vehicle control, modeling, lane tracking, and obstacle avoidance, which is confirmed through vehicle tests.

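The split into longitudinal and lateral controllers can be pictured with simple proportional laws; the gains and safe-distance logic below are illustrative assumptions, not the paper's control design.

```python
# Illustrative lateral (steering) and longitudinal (speed) control laws;
# gains, thresholds, and units are assumptions.
def lateral_control(lane_offset_m, heading_error_rad,
                    k_offset=0.5, k_heading=1.0):
    """Steering command (rad) that drives lane-tracking errors to zero."""
    return -(k_offset * lane_offset_m + k_heading * heading_error_rad)

def longitudinal_control(current_speed, target_speed, obstacle_distance_m,
                         safe_distance_m=8.0, k_speed=0.4):
    """Acceleration command; brakes when an obstacle enters the safe distance."""
    if obstacle_distance_m < safe_distance_m:
        return -2.0                                  # decelerate to avoid the obstacle
    return k_speed * (target_speed - current_speed)  # otherwise track the target speed
```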

3D Image Mergence using Weighted Bipartite Matching Method based on Minimum Distance (최소 거리 기반 가중치 이분 분할 매칭 방법을 이용한 3차원 영상 정합)

  • Jang, Taek-Jun;Joo, Ki-See;Jang, Bog-Ju;Kang, Kyeang-Yeong
    • Journal of Advanced Navigation Technology / v.12 no.5 / pp.494-501 / 2008
  • In this paper, to merge the complete 3D information of a body that is occluded from any single viewpoint, a new image merging algorithm is introduced, using images of the body captured on a turntable from four directions. Two images represented by polygon meshes are merged using a weighted bipartite matching method based on minimum distance, with different weights assigned according to coordinates and axes, since the merged images do not show abrupt variation in 3D coordinates and the scan direction is fixed. To obtain the entire 3D information of the body, these steps are repeated three times, since four images are obtained. The proposed method achieves a 200-300% reduction in search time compared with conventional branch-and-bound, dynamic programming, and Hungarian methods, although its matching accuracy is slightly lower than these methods.

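As a point of comparison for the matching step, the sketch below solves the minimum-distance weighted bipartite assignment with SciPy's standard (Hungarian-style) solver rather than the paper's faster matching method; the per-axis weights are assumptions.

```python
# Illustrative weighted bipartite matching of vertices from two overlapping
# meshes; weights and data are assumptions, and linear_sum_assignment is the
# standard optimal solver, not the paper's faster heuristic.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_vertices(pts_a, pts_b, axis_weights=(1.0, 1.0, 0.3)):
    w = np.sqrt(np.asarray(axis_weights))
    cost = cdist(pts_a * w, pts_b * w)        # per-axis weighted Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return list(zip(rows, cols))

pts_a = np.random.rand(50, 3)
pts_b = pts_a + np.random.normal(scale=0.01, size=pts_a.shape)  # slightly perturbed copy
print(match_vertices(pts_a, pts_b)[:5])
```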