• Title/Abstract/Keywords: Camera Geometry


수평 이동식 스테레오 카메라의 초점을 이용한 주시각 제어 연구 (Vergence control of horizontal moving axis stereo camera using lens focusing)

  • 박순용;최영수;이용범
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICROS 1996 Korea Automatic Control Conference Proceedings (Domestic Session); POSTECH, Pohang; 24-26 Oct. 1996
    • /
    • pp.403-406
    • /
    • 1996
  • In this paper, the geometry between the horizontal and vertical movements of the lens is studied for automatic vergence control of a horizontal-moving-axis stereo camera. When the stereo disparity remains constant, the horizontal movement of the camera lens, which governs image disparity, and the vertical movement, which governs image focus, are linearly related. Using this linearity, we can control the vergence of the stereo camera by focusing the stereo camera lens alone.
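The linear focus-to-shift relation described above lends itself to a short sketch. The calibration numbers below are hypothetical, assumed only to illustrate fitting and using such a linear mapping:

```python
import numpy as np

# Hypothetical calibration pairs: vertical lens travel used for focusing,
# and the horizontal lens shift that kept the stereo disparity constant.
focus_travel = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # mm (assumed)
horizontal_shift = np.array([0.5, 1.0, 1.5, 2.0, 2.5])  # mm (assumed)

# Fit the linear geometry reported in the paper: shift = a * focus + b.
a, b = np.polyfit(focus_travel, horizontal_shift, 1)

def vergence_shift(focus):
    """Predict the horizontal (vergence) lens shift from focus alone."""
    return a * focus + b
```

Once the line is fitted, vergence can be driven from the focus reading only, which is the point of the paper.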


EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 6
    • /
    • pp.2044-2059
    • /
    • 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, the existing strategies are limited to imposing an image-level constraint between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable, and helps our EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
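The pixel-level epipolar constraint exploited above can be written down directly. This is a generic illustration of the underlying geometry, not EpiLoc's training code: for a relative pose (R, t) and intrinsics K, corresponding pixels satisfy x2ᵀ F x1 = 0, and the residual can serve as a supervision signal:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_pose(K, R, t):
    """F = K^-T [t]_x R K^-1, relating pixels of two views: x2^T F x1 = 0."""
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

def epipolar_residual(F, x1, x2):
    """Algebraic epipolar error for homogeneous pixel coordinates."""
    return float(x2 @ F @ x1)
```

A correspondence that lies on its epipolar line gives a zero residual; a mismatched pixel gives a nonzero one, which is what a pixel-level loss can penalize.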

Determination of Epipolar Geometry for High Resolution Satellite Images

  • Noh Myoung-Jong;Cho Woosug
    • Korean Society of Remote Sensing: Conference Proceedings
    • /
    • Korean Society of Remote Sensing, Proceedings of ISRS 2004
    • /
    • pp.652-655
    • /
    • 2004
  • The geometry of a satellite image captured by a linear pushbroom scanner differs from that of a frame camera image. Since the exterior orientation parameters of a satellite image vary scan line by scan line, its epipolar geometry also differs from that of a frame camera image. The 2D affine orientation for the epipolar image of a linear pushbroom scanner system is well established using the collinearity equation (Tetsu Ono, 1999). Another epipolar geometry of the linear pushbroom scanner system was more recently established by Habib (2002), who reported that the epipolar geometry of a linear pushbroom satellite image is realized by parallel projection based on 2D affine models. In this paper, we compare Ono's method with Habib's method. In addition, we propose a method that generates epipolar resampled images. For the experiments, IKONOS stereo images were used in generating the epipolar images.


On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kenji, Kohiyama;Shigeru, Uchikado;Lili, Sun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICROS ICCAS 2003
    • /
    • pp.1121-1125
    • /
    • 2003
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. A pinhole camera model is used as the camera model. The essential notions are epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing visual servoing. For easier understanding of the proposed method, we first show a design for the case of a calibrated camera. The design consists of 4 steps, and the directional motion of the robot arm is fixed to a single constant direction. This means that an estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.
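The epipole that encodes the translation direction above is the null-space direction of the fundamental matrix, and can be sketched in a few lines. The matrix in the check is a toy example (pure forward translation with identity intrinsics), not data from the paper:

```python
import numpy as np

def epipole(F):
    """Right epipole e with F e = 0, taken as the null singular vector of F."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    # Normalize to (x, y, 1) unless the epipole is at infinity.
    return e / e[2] if abs(e[2]) > 1e-12 else e
```

For a camera translating straight ahead, the epipole sits at the image point the camera moves toward, which is exactly how the servoing design reads the translation direction off the image plane.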


머신비젼으로 패턴 인식기법에 의한 엔드밀 마모 검출에 관한 연구 (A Study on the End Mill Wear Detection by the Pattern Recognition Method in the Machine Vision)

  • 이창희;조택동
    • Journal of the Korean Society for Precision Engineering
    • /
    • Vol. 20, No. 4
    • /
    • pp.223-229
    • /
    • 2003
  • Tool wear monitoring is an important technique in flexible manufacturing systems. This paper studies end mill wear detection using a CCD camera and a pattern recognition method. As the end mill works in the machining center, the geometry of its bottom edge changes, and this information is used. The CCD camera grabs the new and worn tool geometries, and the areas of the two tool geometries are compared. As a result, when the area difference obtained by subtracting the worn tool image from the new tool image reaches 200 pixels, the tool is judged to have reached the end of its life. This paper proposes this as a new method of end mill wear detection.
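The area-difference criterion can be sketched as follows. The image sizes and the 128 binarization threshold are assumptions for illustration; the 200-pixel life limit is the figure reported in the paper:

```python
import numpy as np

def silhouette_area(img, thresh=128):
    """Pixel count of the binarized tool silhouette (threshold assumed)."""
    return int((img >= thresh).sum())

def tool_expired(new_img, worn_img, limit=200):
    """Paper's criterion: tool life ends when the silhouette area lost to
    wear (new-tool area minus worn-tool area) reaches ~200 pixels."""
    return silhouette_area(new_img) - silhouette_area(worn_img) >= limit
```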

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5, No. 6
    • /
    • pp.663-673
    • /
    • 2007
  • This paper presents a new moving obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from the pattern in a perspective camera; the geometric characteristics of the omnidirectional camera influence the optical flow in the omnidirectional image. When a mobile robot with an omnidirectional camera moves, the optical flow is not only theoretically calculated in the omnidirectional image, but also investigated in omnidirectional and panoramic images. In this paper, the panoramic image is generated from an omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the estimated optical flow in omnidirectional and panoramic images, and are used as reference vectors for the relative evaluation of optical flow. The moving obstacle is detected through this relative evaluation of optical flows. The proposed algorithm is tested in four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by the experimental results.
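The FOE used above as a reference vector is the common intersection of the flow lines under pure translation; a minimal least-squares estimate can be sketched as below. This uses synthetic radial flow in a flat image and does not model the paper's omnidirectional mirror geometry:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion: every flow vector lies on a line
    through the FOE, so n_i . foe = n_i . p_i with n_i normal to flow i."""
    A, b = [], []
    for (x, y), (u, v) in zip(points, flows):
        n = np.array([-v, u])  # normal to the flow direction
        A.append(n)
        b.append(n @ np.array([x, y]))
    foe, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return foe
```

With the FOE in hand, flow vectors that disagree with the expected radial pattern can be flagged, which is the relative-evaluation idea the abstract describes.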

스테레오 영상을 이용한 자기보정 및 3차원 형상 구현 (3D Reconstruction and Self-calibration based on Binocular Stereo Vision)

  • 후영영;정경석
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • Vol. 13, No. 9
    • /
    • pp.3856-3863
    • /
    • 2012
  • We developed a technique for reconstructing 3D shape from stereo images that requires minimal user intervention. Reconstruction consists of three stages, each estimating a particular geometry group. The first stage estimates the epipolar geometry present in the images and matches feature points between them. The second stage is an affine geometry estimation that finds a particular plane in projective space using the vanishing-point method. The third stage includes self-calibration of the camera and obtains the metric geometry parameters from which a 3D model can be derived. The advantage of this method is that the stereo images need not be calibrated for reconstruction, and its feasibility was demonstrated.
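The first stage above, estimating the epipolar geometry from matched feature points, is classically done with the 8-point algorithm. A minimal, unnormalized sketch with synthetic correspondences (not the paper's data or exact pipeline):

```python
import numpy as np

def eight_point(x1, x2):
    """Linear 8-point estimate of the fundamental matrix F (x2^T F x1 = 0)
    from matched homogeneous points; rank-2 constraint enforced via SVD."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)   # project onto rank-2 matrices
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

A production version would normalize the point coordinates first (Hartley normalization); this sketch keeps only the core linear step.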

비유클리드공간 정보를 사용하는 증강현실 (Augmented Reality Using Projective Information)

  • 서용덕;홍기상
    • Journal of Broadcast Engineering
    • /
    • Vol. 4, No. 2
    • /
    • pp.87-102
    • /
    • 1999
  • To implement augmented reality, which composites virtual 3D computer graphics with real video images, the camera's internal parameters, such as the focal length, and its motion information (rotation and translation) are indispensable. Accordingly, conventional methods either compute the camera's internal parameters in advance and then calculate the camera motion from information obtained from the real images, or place a 3D calibration pattern in the real scene and analyze the shape of the pattern in the images to compute the internal parameters and the motion simultaneously. This paper proposes a method for implementing augmented reality using projective-space camera motion information, which can be obtained from point matches in the real images without camera calibration. A virtual camera is defined to substitute for the real camera's internal parameters and Euclidean motion; to link the real space and the virtual graphics space, the virtual camera is computed from the image coordinates of reference points of a virtual-space coordinate system that the user inserts into two images. The proposed method requires no separate information about the camera's internal parameters, and shows that all functions supported by computer graphics can be realized with non-Euclidean (projective) information.


비교정 영상으로부터 왜곡을 제거한 3 차원 재구성방법 (3D reconstruction method without projective distortion from un-calibrated images)

  • 김형률;김호철;오장석;구자민;김민기
    • Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • IEEK 2005 Fall Conference
    • /
    • pp.391-394
    • /
    • 2005
  • In this paper, we present an approach that is able to reconstruct 3D metric models from un-calibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, nor of the arrangement of one camera with respect to the other, then the projective reconstruction will carry a projective distortion, expressed by an arbitrary projective transformation. This distortion is removed, upgrading the reconstruction from projective to metric, through self-calibration. Self-calibration is the process of determining the internal camera parameters directly from multiple un-calibrated images; it requires no information about the camera matrices or the scene geometry, and avoids the onerous task of calibrating cameras with special calibration objects. The root of the method is a conic uniquely fixed in 3D space, the absolute quadric, which can be identified from the images. Once the absolute quadric is identified, the metric geometry can be computed. We compared the reconstruction from calibrated images with the result of the self-calibration method.
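The absolute quadric constraint underlying this self-calibration can be stated compactly: for any camera P, the dual image of the absolute conic is ω* = P Q*∞ Pᵀ = K Kᵀ, which exposes the intrinsics K. Below is a small numerical check of that identity with toy intrinsics and pose (not the paper's data):

```python
import numpy as np

# In a metric frame the absolute dual quadric is Q*_inf = diag(1, 1, 1, 0).
Q = np.diag([1.0, 1.0, 1.0, 0.0])

def diac(P):
    """Dual image of the absolute conic, omega* = P Q*_inf P^T,
    normalized so its (2, 2) entry is 1, matching K K^T."""
    w = P @ Q @ P.T
    return w / w[2, 2]
```

In self-calibration the identity is used in reverse: Q*∞ is solved for from constraints on ω* across views, and K follows from factoring ω* = K Kᵀ.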
