• Title/Summary/Keyword: Monocular Vision

Search results: 104 items (processing time: 0.03 s)

Monocular 3D Vision Unit for Correct Depth Perception by Accommodation

  • Hosomi, Takashi;Sakamoto, Kunio;Nomura, Shusaku;Hirotomi, Tetsuya;Shiwaku, Kuninori;Hirakawa, Masahito
    • Korean Information Display Society: Conference Proceedings
    • /
    • Korean Information Display Society, 9th International Meeting on Information Display (2009)
    • /
    • pp.1334-1337
    • /
    • 2009
  • The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.


Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems
    • /
    • Vol. 16, No. 1
    • /
    • pp.155-170
    • /
    • 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance to an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experiments show that, for a given abscissa, the ordinates of image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into that function. The vertical distance from the target object to the optical axis is then calculated according to the imaging principle of the camera, and the passive range is derived from the depth and this vertical distance. Experimental results show that ranging by this method achieves higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% at 3-10 m. Compared with other monocular methods, this method requires no calibration before ranging and avoids the error caused by data fitting.
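The depth-extraction idea described in the abstract can be sketched minimally as follows. This is an illustration under our own simplifying assumptions, not the paper's implementation: the function names are hypothetical, the linear model is fitted from just two reference ("conjugate") points, and a flat ground plane with known camera height is assumed so that depth = height / tan(imaging angle).

```python
import math

def fit_angle_model(v1, a1, v2, a2):
    """Fit the linear model angle(v) = k*v + b from two reference points
    with known pixel ordinates v1, v2 and measured imaging angles a1, a2
    (radians below the horizontal)."""
    k = (a2 - a1) / (v2 - v1)
    b = a1 - k * v1
    return k, b

def depth_from_ordinate(v, k, b, cam_height):
    """Depth of a ground point: the viewing ray at angle(v) below the
    horizontal meets the ground plane at distance h / tan(angle)."""
    angle = k * v + b
    return cam_height / math.tan(angle)
```

Once the two coefficients are fixed, any image ordinate maps directly to a depth, which is why no per-scene calibration or data fitting is needed at ranging time.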

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, Kie-jeong
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol. 16, No. 4
    • /
    • pp.581-589
    • /
    • 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are measured from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies by user command, the follower aircraft tracks the leader using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without it, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).

VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle

  • 김태진;최진우;이영준;최현택
    • Journal of Ocean Engineering and Technology
    • /
    • Vol. 30, No. 5
    • /
    • pp.426-430
    • /
    • 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and researched for various fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of an unmanned surface vehicle. After a polar histogram is created using VFH+, an open sector free of obstacles is selected as the moving direction. Instead of distance sensor data, monocular vision data are used to build the polar histogram, which encodes the obstacle information. Because the method is intended for USVs, any object on the water is treated as an obstacle. Simulation results with sea images verify that the moving direction changes according to the positions of objects.
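The direction-selection step of VFH+ described above can be sketched roughly as follows. The sector indexing, threshold, and function name are illustrative choices of ours, not taken from the paper, and real VFH+ additionally applies hysteresis thresholds and a multi-term cost function.

```python
def select_direction(histogram, target_sector, threshold):
    """Pick the open sector (polar-histogram density below the threshold)
    closest to the desired direction of travel; return None if the field
    of view is fully blocked."""
    open_sectors = [i for i, h in enumerate(histogram) if h < threshold]
    if not open_sectors:
        return None  # no safe heading: the vehicle should stop or turn
    return min(open_sectors, key=lambda i: abs(i - target_sector))
```

In the paper's setting, the histogram densities come from obstacles segmented out of the camera image rather than from range-sensor readings.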

The Analysis of the P-VEP on the Normal Monocular Vision and Amblyopia in Binocular

  • 김덕훈;김규수;성아영;박원학
    • Journal of Korean Ophthalmic Optics Society
    • /
    • Vol. 10, No. 1
    • /
    • pp.41-46
    • /
    • 2005
  • The purpose of this study was to analyze P-VEP waveforms for normal monocular vision and amblyopia under binocular conditions. P-VEPs were recorded with a three-channel Nicolet system. Five adult subjects were recorded (three men, two women; mean age 22 years, range 19 to 24 years). Each subject completed a questionnaire covering general health, medication, heredity, allergies, and ocular disease. Visual acuity and stereopsis were recorded monocularly and binocularly for each subject. During VEP recording, subjects viewed the P-VEP stimulus monocularly and binocularly with corrected vision. The results showed that binocular visual acuity was better than normal monocular acuity, and stereopsis was 140 seconds of arc or more. Meanwhile, the stimulated normal monocular eye showed a higher P-VEP amplitude than binocular viewing, whereas the amblyopic waveform was considerably reduced compared with both normal monocular and binocular vision. In conclusion, binocular vision gave better acuity than normal monocular vision in the acuity test, but in the P-VEP test the normal monocular eye showed a higher amplitude than binocular viewing; the amblyopic eye showed reductions in both visual acuity and P-VEP.


Monocular Vision based Relative Position Measurement of an Aircraft

  • 김정호;이창용;이미현;한동인;이대우
    • Journal of the Korean Society for Aeronautical and Space Sciences
    • /
    • Vol. 43, No. 4
    • /
    • pp.289-295
    • /
    • 2015
  • This paper describes a method for measuring the relative position of an aircraft from the ground using a monocular vision sensor. The relative distance and position are measured using the known wingspan of the aircraft and the optical parameters of the camera, and an image-differencing technique is used to extract the aircraft from the image. This technology can serve as a vision-based automatic landing system in place of ILS. The relative position and distance measurement performance was verified using a light aircraft and GPS, confirming an RMS error of 1.85 m.
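The core geometric relation behind ranging from a known wingspan is the pinhole camera model: an object of known width W that spans w pixels in an image taken with focal length f (expressed in pixels) lies at range Z = f * W / w. A minimal sketch, with our own function name:

```python
def distance_from_wingspan(wingspan_m, span_px, focal_px):
    """Pinhole-model range to an aircraft of known wingspan W (meters)
    imaged across span_px pixels by a camera whose focal length is
    focal_px pixels: Z = f * W / w."""
    return focal_px * wingspan_m / span_px
```

The lateral offsets follow the same model, scaling the target's pixel displacement from the image center by Z / f.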

Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo;Kim, Hee Sung;Lee, Je Young;Choi, Kwang Ho;Kang, Sung Jin;Chun, Sebum;Lee, Hyung Keun
    • Journal of Positioning, Navigation, and Timing
    • /
    • Vol. 3, No. 1
    • /
    • pp.1-10
    • /
    • 2014
  • Recently, the monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues, as accidents involving them incur large costs to the national economy and social welfare. To monitor accidents and counteract them promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases; however, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by the sensor, and the possibility of using angular acceleration to detect accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated in an experiment based on actual measurements.
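One simple way to obtain angular acceleration from per-frame heading estimates, sketched here as an illustration rather than the paper's estimator, is a central second difference over three consecutive frames captured at a fixed interval dt:

```python
def angular_acceleration(theta, dt):
    """Estimate angular acceleration from three consecutive heading
    angles (radians), e.g. derived from successive image frames, by a
    central second difference: (theta2 - 2*theta1 + theta0) / dt**2."""
    t0, t1, t2 = theta
    return (t2 - 2.0 * t1 + t0) / dt**2
```

Raw frame-to-frame angle estimates are noisy, so in practice such differences would be filtered (e.g. by a Kalman filter) before thresholding them to flag jackknifing or rollover.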

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps

  • 황서연;송재복
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 17, No. 2
    • /
    • pp.164-170
    • /
    • 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. Both the position and orientation of a lamp feature are used to accurately estimate the robot pose; the orientation is obtained by calculating the principal axis from the pixel distribution of the lamp area. Both corner and lamp features serve as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
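The principal axis of a lamp's pixel distribution can be computed from second-order central image moments, the classical formula theta = 0.5 * atan2(2*mu11, mu20 - mu02). The sketch below is a plain-Python illustration of that standard technique, not the paper's code; `pixels` is assumed to be the list of (x, y) coordinates segmented as belonging to the lamp blob.

```python
import math

def principal_axis(pixels):
    """Orientation of a lamp blob from its pixel distribution: the angle
    of the principal axis derived from second-order central moments."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n          # centroid x
    cy = sum(y for _, y in pixels) / n          # centroid y
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
```

An elongated fluorescent lamp gives a well-conditioned axis, which is what makes its orientation usable as an extra landmark measurement alongside its position.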

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot

  • 김민영;안상태;조형석
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 16, No. 4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited individual sensing capability offers complementarity and cooperation in obtaining better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from camera images, which serve as natural landmark points. With the laser structured light sensor, it utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
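As a loose illustration of the fusion idea (not the paper's reliability-function formulation), the simplest Bayesian rule for combining independent Gaussian estimates of the same quantity is inverse-variance weighting: the less reliable sensor contributes less, and the fused variance is smaller than either input's.

```python
def bayesian_fuse(estimates, variances):
    """Minimum-variance (inverse-variance weighted) fusion of independent
    scalar estimates, e.g. one pose component from the vision sensor and
    one from the structured light sensor."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

The paper's per-sensor reliability functions play the role of the variances here, weighting each sensor's contribution according to experimentally determined trustworthiness.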

A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor

  • 전영산;박정근;강태삼;이정욱
    • Journal of the Korean Society for Aeronautical and Space Sciences
    • /
    • Vol. 41, No. 5
    • /
    • pp.422-428
    • /
    • 2013
  • Interest in small UAVs has grown recently because they are cost-effective and well suited to disaster environments and other places that are difficult for humans to access. For such small UAVs, mapping through distance measurement is an essential technology. Previous unmanned-systems research has mainly used laser sensors and stereo vision sensors for distance measurement; laser sensors offer excellent accuracy and reliability but are mostly expensive, while stereo vision sensors are easy to implement but too heavy to mount on a small UAV. This paper introduces a way to build a low-cost rangefinder using a laser pointer and a single camera. Experiments were conducted in which distances were measured with the proposed system and a map was built from them, and the system's reliability was verified by comparison with ground-truth data.
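A common geometry for laser-pointer-plus-camera ranging, shown here as a hedged sketch rather than the paper's exact setup, mounts the laser parallel to the optical axis at a fixed baseline h. The dot then appears some pixels from the image center; converting that offset to a ray angle theta gives the range D = h / tan(theta). The radians-per-pixel factor and angle offset are calibration constants we assume are known.

```python
import math

def laser_range(pixels_from_center, baseline_m, rad_per_pixel,
                angle_offset=0.0):
    """Triangulated range to the laser dot: theta is the ray angle of the
    dot in the image, and the parallel-mounted laser at baseline h meets
    that ray at D = h / tan(theta)."""
    theta = pixels_from_center * rad_per_pixel + angle_offset
    return baseline_m / math.tan(theta)
```

Because tan(theta) shrinks as the dot approaches the image center, resolution degrades with range, which is one reason such systems are validated against ground-truth data as in the experiments above.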