• Title/Summary/Keyword: Camera Geometry


Vergence control of horizontal moving axis stereo camera using lens focusing (수평 이동식 스테레오 카메라의 초점을 이용한 주시각 제어 연구)

  • 박순용;최영수;이용범
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.403-406
    • /
    • 1996
  • In this paper, the geometry between the horizontal and vertical movements of the lens is studied for automatic vergence control of a horizontal moving axis stereo camera. When the stereo disparity remains constant, the horizontal movement of the camera lens governing image disparity and the vertical movement governing image focus have a linear geometric relationship. Using this linearity, we can control the vergence of the stereo camera only by focusing the stereo camera lens.

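
The linearity claimed in this abstract can be sketched as follows. The numbers are hypothetical (the paper's calibration data are not given here); the point is only that, once the linear relation is fitted, focus displacement alone predicts the vergence movement:

```python
import numpy as np

# Hedged sketch of the paper's claim: at constant stereo disparity, the
# horizontal lens displacement d (vergence) and the focusing displacement f
# are linearly related, d = a*f + b. The sample data below are hypothetical.
focus_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # lens focusing movement
horiz_mm = np.array([0.1, 0.6, 1.1, 1.6, 2.1])   # lens horizontal movement

# Fit the linear geometry d = a*f + b by least squares.
a, b = np.polyfit(focus_mm, horiz_mm, 1)

def vergence_from_focus(f):
    """Predict the horizontal lens movement (vergence control) from focus alone."""
    return a * f + b

print(round(vergence_from_focus(1.2), 3))  # -> 1.3
```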

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.2044-2059
    • /
    • 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, the existing strategies are limited to imposing an image-level constraint between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps our EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
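
The pixel-level constraint rests on the standard epipolar relation: for a fundamental matrix F and a correspondence (x, x'), the product x'ᵀFx should vanish, and its violation can serve as a residual or loss term. A minimal sketch (not the EpiLoc implementation, whose F comes from predicted poses):

```python
import numpy as np

# Algebraic epipolar error x2^T F x1 for homogeneous pixel coordinates.
# Its magnitude can serve as a per-pixel penalty in a localization loss.
def epipolar_residual(F, x1, x2):
    return float(x2 @ F @ x1)

# Toy F for a pure horizontal translation (rectified stereo): rows map to rows.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

x1 = np.array([10.0, 5.0, 1.0])   # pixel in image 1
x2 = np.array([42.0, 5.0, 1.0])   # same row in image 2 -> residual 0
print(epipolar_residual(F, x1, x2))  # -> 0.0
```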

Determination of Epipolar Geometry for High Resolution Satellite Images

  • Noh Myoung-Jong;Cho Woosug
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.652-655
    • /
    • 2004
  • The geometry of a satellite image captured by a linear pushbroom scanner differs from that of a frame camera image. Since the exterior orientation parameters of a satellite image vary scan line by scan line, its epipolar geometry also differs from that of a frame camera image. The 2D affine orientation for epipolar images of linear pushbroom scanner systems is well established using the collinearity equation (T. Ono, 1999). Another epipolar geometry for linear pushbroom scanner systems was recently established by Habib (2002), who reported that the epipolar geometry of a linear pushbroom satellite image is realized by parallel projection based on 2D affine models. In this paper, we compare Ono's method with Habib's method. In addition, we propose a method that generates epipolar resampled images. For the experiments, IKONOS stereo images were used in generating the epipolar images.

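
The 2D affine epipolar model mentioned in the abstract can be sketched as follows: correspondences satisfy a·x' + b·y' + c·x + d·y + e = 0, which yields parallel epipolar lines. A hedged illustration on synthetic matches (the coefficients are recovered as a null vector of the design matrix, not by either paper's exact procedure):

```python
import numpy as np

# Fit the affine epipolar relation a*x' + b*y' + c*x + d*y + e = 0 from
# point correspondences; the coefficient vector is a null vector of the
# stacked design matrix, obtained via SVD.
def fit_affine_epipolar(pts1, pts2):
    A = np.column_stack([pts2[:, 0], pts2[:, 1], pts1[:, 0], pts1[:, 1],
                         np.ones(len(pts1))])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]  # unit-norm coefficients (a, b, c, d, e)

# Synthetic correspondences obeying y' = y (epipolar lines along image rows).
rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 100, size=(8, 2))
pts2 = np.column_stack([pts1[:, 0] + 5.0, pts1[:, 1]])

coef = fit_affine_epipolar(pts1, pts2)
residual = np.abs(np.column_stack([pts2, pts1, np.ones(8)]) @ coef)
print(residual.max() < 1e-9)  # -> True
```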

On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kohiyama, Kenji;Uchikado, Shigeru;Sun, Lili
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1121-1125
    • /
    • 2003
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. It is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole camera model as the camera model. The essential notions, namely epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint, are presented; these play an important role in designing visual servoing. For easy understanding of the proposed method, we first show a design for the case of a calibrated camera. The design is constructed in four steps, and the motion of the robot arm is fixed to a constant direction. This means that an estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.

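
The role of the epipole described above can be sketched numerically: the epipole is the right null vector of the fundamental matrix (Fe = 0), and for a pure translation it marks the direction of motion on the image plane. A minimal illustration, assuming a hypothetical rectified setup:

```python
import numpy as np

# The epipole is the right null vector of F (F e = 0); recover it as the
# singular vector for the smallest singular value.
def epipole(F):
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e / np.linalg.norm(e)

# F for a pure translation along the x axis (hypothetical setup):
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

e = epipole(F)
print(np.round(np.abs(e), 6))  # epipole at infinity along x: [1. 0. 0.]
```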

A Study on the End Mill Wear Detection by the Pattern Recognition Method in the Machine Vision (머신비젼으로 패턴 인식기법에 의한 엔드밀 마모 검출에 관한 연구)

  • 이창희;조택동
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.4
    • /
    • pp.223-229
    • /
    • 2003
  • Tool wear monitoring is an important technique in flexible manufacturing systems. This paper studies end mill wear detection using a CCD camera and a pattern recognition method. While the end mill is working in the machining center, the geometry of its bottom edge changes, and this information is used. The CCD camera grabs images of the new and worn tool geometry, and the areas of the tool silhouettes are compared. When the area of the worn tool subtracted from that of the new tool reaches 200 pixels, the end of tool life is declared. This paper thus proposes a new method of end mill wear detection.
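
The decision rule in this abstract reduces to a pixel-area subtraction with a 200-pixel threshold. A hedged sketch with synthetic binary silhouettes (the paper's actual segmentation pipeline is not reproduced):

```python
import numpy as np

WEAR_LIMIT_PX = 200  # threshold reported in the abstract

def tool_area(binary_img):
    """Count tool pixels in a binarized silhouette image."""
    return int(binary_img.sum())

def tool_worn_out(new_img, worn_img):
    """Declare end of tool life when the area loss reaches the limit."""
    return tool_area(new_img) - tool_area(worn_img) >= WEAR_LIMIT_PX

# Synthetic silhouettes: a 3600-pixel tool, with 300 pixels worn away.
new_tool = np.zeros((100, 100), dtype=np.uint8)
new_tool[20:80, 20:80] = 1
worn_tool = new_tool.copy()
worn_tool[20:25, 20:80] = 0

print(tool_worn_out(new_tool, worn_tool))  # -> True
```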

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.6
    • /
    • pp.663-673
    • /
    • 2007
  • This paper presents a new moving obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from the pattern in a perspective camera; the geometric characteristics of the omnidirectional camera influence the optical flow in the image. When a mobile robot with an omnidirectional camera moves, the optical flow is not only theoretically calculated for the omnidirectional image but also investigated in omnidirectional and panoramic images. In this paper, the panoramic image is generated from an omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the estimated optical flow in omnidirectional and panoramic images. FOE and FOC vectors are used as reference vectors for the relative evaluation of optical flow, and moving obstacles are detected through this relative evaluation. The proposed algorithm is tested for four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by the experimental results.
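
The FOE concept used above can be sketched in the simpler perspective-camera case (not the paper's omnidirectional formulation): for a translating camera, flow vectors radiate from the FOE, so each flow vector defines a line through it, and the FOE is recovered as the least-squares intersection of those lines:

```python
import numpy as np

# Each flow vector v at pixel p defines the line {x : perp(v) . (x - p) = 0}
# through the FOE; stacking these gives a linear least-squares problem.
def estimate_foe(points, flows):
    normals = np.column_stack([-flows[:, 1], flows[:, 0]])  # perpendiculars
    b = np.sum(normals * points, axis=1)
    foe, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return foe

# Synthetic radial flow field expanding from a hypothetical FOE at (160, 120).
true_foe = np.array([160.0, 120.0])
rng = np.random.default_rng(1)
points = rng.uniform(0, 320, size=(20, 2))
flows = 0.05 * (points - true_foe)

print(np.round(estimate_foe(points, flows), 3))  # -> [160. 120.]
```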

3D Reconstruction and Self-calibration based on Binocular Stereo Vision (스테레오 영상을 이용한 자기보정 및 3차원 형상 구현)

  • Hou, Rongrong;Jeong, Kyung-Seok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.9
    • /
    • pp.3856-3863
    • /
    • 2012
  • A 3D reconstruction technique from stereo images that requires minimal intervention from the user has been developed. The reconstruction problem consists of three steps, each estimating a specific stratum of geometry. The first step estimates the epipolar geometry that exists between the stereo image pair, which includes feature matching in both images. The second estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. The third step, which includes camera self-calibration, obtains a metric geometry from which a 3D model of the scene can be derived. The major advantage of this method is that the stereo images do not need to be calibrated for reconstruction. The results of camera calibration and reconstruction have shown the possibility of obtaining a 3D model directly from features in the images.
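
The first of the three steps can be sketched with the standard normalized eight-point algorithm (an assumption; the abstract does not name the estimator used). Given matched points, the fundamental matrix is the null vector of a design matrix built from the correspondences, with rank 2 enforced:

```python
import numpy as np

# Hartley-style normalization: translate centroid to origin, scale so the
# mean distance from the origin is sqrt(2).
def normalize(pts):
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ptsh = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ptsh.T).T, T

def eight_point(pts1, pts2):
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    # Rows encode x2^T F x1 = 0 for the 9 entries of F (row-major).
    A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                        # undo normalization

# Synthetic matches from two hypothetical cameras observing random 3D points.
rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 6, 20)])
pts1 = X[:, :2] / X[:, 2:]                           # camera at the origin
pts2 = (X[:, :2] + np.array([0.2, 0.0])) / X[:, 2:]  # translated camera

F = eight_point(pts1, pts2)
h1 = np.column_stack([pts1, np.ones(20)])
h2 = np.column_stack([pts2, np.ones(20)])
residual = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))
print(residual.max() < 1e-9)  # -> True
```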

Augmented Reality Using Projective Information (비유클리드공간 정보를 사용하는 증강현실)

  • 서용덕;홍기상
    • Journal of Broadcast Engineering
    • /
    • v.4 no.2
    • /
    • pp.87-102
    • /
    • 1999
  • We propose an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. We define a virtual camera, through which views of graphics objects are generated, attached to the real camera by specifying image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera, recovered from image matches, serves to transfer the virtual camera and makes it move according to the motion of the real camera. The virtual camera also follows changes in the internal parameters of the real camera. This paper shows theoretical and experimental results of our application of non-metric vision to augmented reality.


3D reconstruction method without projective distortion from un-calibrated images (비교정 영상으로부터 왜곡을 제거한 3 차원 재구성방법)

  • Kim, Hyung-Ryul;Kim, Ho-Cul;Oh, Jang-Suk;Ku, Ja-Min;Kim, Min-Gi
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.391-394
    • /
    • 2005
  • In this paper, we present an approach that can reconstruct 3D metric models from un-calibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, nor of the arrangement of one camera with respect to the other, then the projective reconstruction will have a projective distortion expressed by an arbitrary projective transformation. This distortion is removed by upgrading the reconstruction from projective to metric through self-calibration, which requires no information about the camera matrices or the scene geometry. Self-calibration is the process of determining internal camera parameters directly from multiple un-calibrated images; it avoids the onerous task of calibrating cameras with special calibration objects. The root of the method is a conic uniquely fixed in 3D space (the absolute quadric), which can be identified from the images. Once the absolute quadric is identified, the metric geometry can be computed. We compared the reconstruction from calibrated images with the result of the self-calibration method.

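
The key identity behind the absolute-quadric method can be sketched as follows: for a camera P and the absolute dual quadric Q* (diag(1,1,1,0) in a metric frame), the dual image of the absolute conic is ω* = P Q* Pᵀ = K Kᵀ, from which the internal parameters K can be factored out. K below is hypothetical; in self-calibration Q* is estimated from the images rather than assumed:

```python
import numpy as np

# Hypothetical intrinsic matrix (focal length 800 px, principal point 320,240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # P = K [I | 0]
Q = np.diag([1.0, 1.0, 1.0, 0.0])                 # absolute dual quadric

w = P @ Q @ P.T                                   # w* = K K^T

# Factor w* = K K^T with K upper triangular: cholesky(w^-1) equals K^-T,
# so inverting and transposing recovers K (up to scale).
L = np.linalg.cholesky(np.linalg.inv(w))
K_rec = np.linalg.inv(L).T
K_rec = K_rec / K_rec[2, 2]

print(np.allclose(K_rec, K))  # -> True
```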