• Title/Abstract/Keywords: Camera Geometry

Search results: 202 (processing time: 0.209 s)

Camera Motion Parameter Estimation Technique using 2D Homography and LM Method based on Invariant Features

  • Cha, Jeong-Hee
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 4 / pp. 297-301 / 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Image feature information typically has drawbacks: it varies with the camera viewpoint, and the amount of information grows over time. The LM (Levenberg-Marquardt) method, a nonlinear least-squares technique used to estimate camera extrinsic parameters, also has a weak point: the number of iterations needed to reach the minimum depends on the initial values, and the convergence time increases if the process falls into a local minimum. To address these shortcomings, we first propose constructing feature models using geometric invariant vectors. Second, we propose a two-stage calculation method that improves accuracy and convergence by combining a 2D homography with the LM method. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate the superiority of the proposed algorithms.
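
The two-stage idea above — an initial estimate from a planar homography, then Levenberg-Marquardt refinement of the reprojection error — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the intrinsic matrix `K`, the planar-scene assumption, and the matched points are placeholders.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

def pose_from_plane_homography(H, K):
    """Stage 1: initial R, t from a homography mapping plane points (X, Y, 1) to pixels."""
    A = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(A[:, 0])
    if s * A[2, 2] < 0:                      # keep the plane in front of the camera
        s = -s
    r1, r2 = s * A[:, 0], s * A[:, 1]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)              # project onto the nearest rotation matrix
    return U @ Vt, s * A[:, 2]

def estimate_motion(plane_pts3d, image_pts):
    """plane_pts3d: N x 3 points with Z = 0; image_pts: N x 2 matched pixels."""
    H, _ = cv2.findHomography(plane_pts3d[:, :2], image_pts, cv2.RANSAC)
    R0, t0 = pose_from_plane_homography(H, K)
    rvec0, _ = cv2.Rodrigues(R0)

    def reprojection_error(x):
        proj, _ = cv2.projectPoints(plane_pts3d, x[:3], x[3:], K, None)
        return (proj.reshape(-1, 2) - image_pts).ravel()

    # Stage 2: Levenberg-Marquardt refinement of the 6 pose parameters.
    sol = least_squares(reprojection_error, np.hstack([rvec0.ravel(), t0]), method="lm")
    return sol.x[:3], sol.x[3:]              # refined rvec, tvec
```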

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry (영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법)

  • Lee, Yong-Joo;Kim, Do-Wan;Park, Man-Woo
    • Journal of KIBIM / Vol. 5, No. 2 / pp. 46-54 / 2015
  • Vision-based tracking has been proposed as a means to efficiently track a large number of construction resources operating on a congested site. To obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient entity matching method for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair for a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
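
A minimal sketch of the epipolar-line proximity test described above is given below; the fundamental matrix `F` and the detected worker centroids are illustrative placeholders (in practice `F` comes from calibrating the fixed camera pair).

```python
import numpy as np

def point_line_distance(line, pt):
    """Perpendicular distance from pixel pt = (x, y) to epipolar line ax + by + c = 0."""
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / np.hypot(a, b)

def match_entities(F, dets_cam1, dets_cam2):
    """Pair each detection in camera 1 with the detection closest to its epipolar line in camera 2."""
    pairs = []
    for i, p in enumerate(dets_cam1):
        line = F @ np.array([p[0], p[1], 1.0])            # epipolar line in camera 2
        dists = [point_line_distance(line, q) for q in dets_cam2]
        pairs.append((i, int(np.argmin(dists))))
    return pairs

# Toy usage with a hypothetical F and detected worker centroids (pixels).
F = np.array([[0, -1e-6, 1e-3], [1e-6, 0, -2e-3], [-1e-3, 2e-3, 1]])
dets_cam1 = [(320, 240), (100, 400)]
dets_cam2 = [(350, 230), (120, 390)]
print(match_entities(F, dets_cam1, dets_cam2))
```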

Epipolar Resampling for High Resolution Satellite Imagery Based on Parallel Projection (평행투영 기반의 고해상도 위성영상 에피폴라 재배열)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Chang, Hwi-Jeong;Jeong, Ji-Yeon
    • Journal of Korean Society for Geospatial Information Science / Vol. 15, No. 4 / pp. 81-88 / 2007
  • The geometry of a satellite image captured by a linear CCD sensor differs from that of a frame camera image: because the exterior orientation parameters of a linear CCD image vary from scan line to scan line, its image geometry is not the same as that of a frame image. Therefore, an epipolar geometry for linear CCD images is needed that differs from that of frame camera images. In this paper, we propose a method of resampling linear CCD satellite imagery into epipolar geometry under the assumption that the image is formed not by perspective projection but by parallel projection, using a 2D affine sensor model based on parallel projection. For the experiment, IKONOS stereo images, which are high-resolution linear CCD images, were used. As results, the spatial accuracy of the 2D affine sensor model is investigated and the accuracy of the epipolar-resampled image with the RFM is presented.
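
As a rough illustration of the parallel-projection idea, the 8-parameter 2D affine sensor model can be fitted to ground control points by linear least squares; this sketch is generic and does not reproduce the paper's resampling procedure or its IKONOS experiment.

```python
import numpy as np

# 2D affine (parallel-projection) sensor model:
#   x = a1*X + a2*Y + a3*Z + a4,   y = a5*X + a6*Y + a7*Z + a8
def fit_affine_model(ground_xyz, image_xy):
    """Fit the 8 parameters from ground control points (N x 3) and pixels (N x 2)."""
    A = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])   # rows [X Y Z 1]
    coef_x, *_ = np.linalg.lstsq(A, image_xy[:, 0], rcond=None)  # a1..a4
    coef_y, *_ = np.linalg.lstsq(A, image_xy[:, 1], rcond=None)  # a5..a8
    return coef_x, coef_y

def apply_affine_model(coef_x, coef_y, ground_xyz):
    """Project ground points to image coordinates with the fitted affine model."""
    A = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])
    return np.column_stack([A @ coef_x, A @ coef_y])
```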

Epipolar Geometry of Line Cameras Moving with Constant Velocity and Attitude

  • Habib, Ayman F.;Morgan, Michel F.;Jeong, Soo;Kim, Kyung-Ok
    • ETRI Journal / Vol. 27, No. 2 / pp. 172-180 / 2005
  • Image resampling according to epipolar geometry is an important prerequisite for a variety of photogrammetric tasks. Established procedures for resampling frame images according to epipolar geometry are not suitable for scenes captured by line cameras. In this paper, the mathematical model describing epipolar lines in scenes captured by line cameras moving with constant velocity and attitude is established and analyzed. The choice of this trajectory is motivated by the fact that many line cameras can be assumed to follow such a flight path during the short duration of a scene capture (especially when considering space-borne imaging platforms). Experimental results from synthetic along-track and across-track stereo-scenes are presented. For these scenes, the deviations of the resulting epipolar lines from straightness, as the camera's angular field of view decreases, are quantified and presented.
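
The constant-velocity, constant-attitude assumption makes a line camera easy to simulate: every scan line shares one rotation and the perspective centre moves linearly. The following toy projection function is a sketch under that assumption only; the attitude, velocity, and focal length are illustrative, and the epipolar analysis itself is not reproduced.

```python
import numpy as np

f = 1000.0                           # focal length in pixels (illustrative)
R = np.diag([1.0, -1.0, -1.0])       # constant nadir-looking attitude
C0 = np.array([0.0, 0.0, 500.0])     # perspective centre at the first scan line
v = np.array([1.0, 0.0, 0.0])        # constant velocity per scan line (along-track x)

def project_pushbroom(P):
    """Return (scan line, column) of ground point P for an ideal pushbroom sensor."""
    r0 = R[0]                                    # along-track axis of the camera frame
    # The point is imaged when it lies in the instantaneous sensor plane,
    # i.e. its along-track camera coordinate is zero; solve for that time t.
    t = r0 @ (P - C0) / (r0 @ v)
    p_cam = R @ (P - C0 - v * t)                 # camera-frame coordinates at time t
    return t, f * p_cam[1] / p_cam[2]            # (scan line index, column in pixels)

print(project_pushbroom(np.array([120.0, 30.0, 0.0])))   # -> (120.0, -60.0)
```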

Injection Characteristics with Valve Geometries for a Diesel Engine (디젤기관용 분사밸브 형상에 따른 분사특성)

  • 김성윤;오승우;박권하
    • Journal of Advanced Marine Engineering and Technology / Vol. 27, No. 6 / pp. 745-752 / 2003
  • Injection technology is one of the key technologies in a diesel engine, and many studies have been conducted on the injection system. In this study, the fuel chamber geometry, the orifice ratio, and the needle lift of the injection valve of a diesel engine for power generation were varied and tested. The injection pressure, injection duration, and spray shapes were measured with a pressure transducer, a needle lift sensor, and a high-speed camera. The results show that the nozzle hole size has a marked influence on the rail pressure and the injection duration.

Vision-based Camera Localization using DEM and Mountain Image (DEM과 산영상을 이용한 비전기반 카메라 위치인식)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / Vol. 10, No. 6 / pp. 177-186 / 2005
  • In this paper, we propose a vision-based camera localization technique using 3D information created by mapping a DEM onto a mountain image. Image features typically used for localization have drawbacks: they vary with the camera viewpoint, and the amount of information grows over time. In this paper, we extract geometric invariant features that are independent of the camera viewpoint and estimate the camera extrinsic parameters through accurate corresponding-point matching using a proposed similarity evaluation function and the Graham search method. We also propose a method for creating the 3D information using graph theory and visual clues. The proposed method consists of the following three stages: extraction of invariant vectors from point features, creation of 3D information, and estimation of the camera extrinsic parameters. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate its superiority.
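
Only the extrinsic-parameter step lends itself to a short sketch. Given 3D points taken from a DEM and their matched pixels in the mountain image, a standard PnP solver recovers the camera rotation and translation; `solvePnP` is a generic stand-in here, the paper's similarity evaluation function and Graham search matching are not shown, and `K` and the toy pose are invented for the example.

```python
import numpy as np
import cv2

K = np.array([[1200.0, 0, 640], [0, 1200.0, 360], [0, 0, 1]])   # assumed intrinsics

# 3D points sampled from a DEM (metres); the values are illustrative.
dem_points = np.array([[1250.0,  830.0, 410.0],
                       [1310.0,  905.0, 395.0],
                       [1190.0,  980.0, 440.0],
                       [1275.0, 1020.0, 425.0],
                       [1405.0,  870.0, 380.0],
                       [1150.0,  760.0, 460.0]])

# Build a consistent toy example by projecting with a known pose ...
true_rvec = np.array([0.10, -0.20, 0.05])
true_tvec = np.array([-1200.0, -850.0, 900.0])
image_points, _ = cv2.projectPoints(dem_points, true_rvec, true_tvec, K, None)

# ... then recover the extrinsic parameters from the 3D-2D correspondences.
ok, rvec, tvec = cv2.solvePnP(dem_points, image_points.reshape(-1, 2), K, None)
print("recovered rvec:", rvec.ravel(), "tvec:", tvec.ravel())
```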

A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek;Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 22, No. 6 / pp. 143-148 / 2022
  • Analyzing images taken continuously by the monocular camera of a mobile smartphone or robot to estimate the camera's location is very important for metaverse, mobile robot, and user location services. So far, PnP-related techniques have been applied to calculate the position. In this paper, the camera's direction of motion is obtained from the essential matrix of the epipolar geometry between successive images, and the camera's successive positions are then calculated through geometric equations. A new estimation method is proposed, and its accuracy is verified through simulation. This method is completely different from existing methods and can be applied even when only a single matching feature point exists across two or more images.
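
A sketch of the essential-matrix step is shown below; `K` and the toy scene are invented, and note that `recoverPose` returns only a unit direction of translation, so the metric step length must come from additional constraints (which is what the paper's geometric equations address).

```python
import numpy as np
import cv2

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed intrinsics

def relative_motion(pts1, pts2):
    """R and unit-norm t between two frames from matched pixel points (N x 2 arrays)."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t.ravel()                        # scale of t is unobservable from E alone

# Toy usage: project random 3D points from two nearby camera positions.
rng = np.random.default_rng(0)
P = rng.uniform([-2, -2, 4], [2, 2, 8], size=(60, 3))
p1, _ = cv2.projectPoints(P, np.zeros(3), np.zeros(3), K, None)
p2, _ = cv2.projectPoints(P, np.zeros(3), np.array([0.3, 0.0, 0.1]), K, None)
R, t = relative_motion(p1.reshape(-1, 2), p2.reshape(-1, 2))
print("estimated motion direction:", t)
```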

Sum of Squares-Based Range Estimation of an Object Using a Single Camera via Scale Factor

  • Kim, Won-Hee;Kim, Cheol-Joong;Eom, Myunghwan;Chwa, Dongkyoung
    • Journal of Electrical Engineering and Technology / Vol. 12, No. 6 / pp. 2359-2364 / 2017
  • This paper proposes a scale-factor-based range estimation method using a sum of squares (SOS) approach. Many previous studies measured distance with cameras, which usually required two cameras and a long computation time for image processing. To overcome these disadvantages, we propose a range estimation method for an object using a single moving camera. An SOS-based Luenberger observer is proposed to estimate the range on the basis of the Euclidean geometry of the object. By using a scale factor, the proposed method achieves a faster operation speed than previous methods. The validity of the proposed method is verified through simulation results.
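
The observer design itself is outside the scope of a short sketch, but the pinhole relation that makes a scale factor informative for range is simple and worth noting: an object of known physical size appears with a pixel size inversely proportional to its range. The numbers below are illustrative, and this is not the paper's SOS observer.

```python
def range_from_scale(focal_px: float, object_size_m: float, pixel_size_px: float) -> float:
    """Pinhole range estimate from the apparent (pixel) size of a known-size object."""
    return focal_px * object_size_m / pixel_size_px

# Example: a 1.7 m target imaged 85 px tall with an 800 px focal length.
print(range_from_scale(800.0, 1.7, 85.0))   # -> 16.0 m
```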

A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor (CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구)

  • Shin, Chan-Bai;Kim, Jin-Dae;Lee, Jeh-Won
    • Proceedings of the KIEE Conference / KIEE 2007 Symposium, Information and Control Division / pp. 231-233 / 2007
  • In this paper, we present a new visual approach to robust bin picking, a two-step concept for a vision-driven automatic handling robot. The technology described here is based on two types of sensors: a 3D laser scanner and a CCD video camera. The geometry and pose (position and orientation) information of the bin contents is reconstructed from the camera and laser sensor, and this information can be employed to guide the robotic arm. A new thinning algorithm and a constrained Hough transform method are also explained in this paper. Consequently, the developed bin-picking system demonstrates successful operation with a 3D hole object.
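
A rough sketch of the hole-detection step using OpenCV's standard Hough circle transform is shown below; the paper's constrained Hough transform and thinning algorithm are not reproduced, and the synthetic image and parameter values are placeholders.

```python
import numpy as np
import cv2

img = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(img, (300, 240), 40, 255, 3)          # synthetic "hole" edge
img = cv2.GaussianBlur(img, (5, 5), 0)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=30, minRadius=20, maxRadius=80)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        print(f"hole candidate at ({x}, {y}), radius {r} px")
```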

Video-Based Augmented Reality without Euclidean Camera Calibration (유클리드 카메라 보정을 하지 않는 비디오 기반 증강현실)

  • Seo, Yong-Deuk
    • Journal of the Korea Computer Graphics Society / Vol. 9, No. 3 / pp. 15-21 / 2003
  • An algorithm is developed for augmenting a real video with virtual graphics objects without computing Euclidean information. The real motion of the camera is obtained in affine space by a direct linear method using image matches. A virtual camera is then defined by specifying the locations of four basis points in two input images as an initialization step. The four pairs of 2D locations and their 3D affine coordinates provide a Euclidean orthographic projection camera throughout the whole video sequence. Our method can generate views of objects shaded by virtual light sources, because we can make use of all the functions of a graphics library written on the basis of Euclidean geometry. Our novel formulation and experimental results with real video sequences are presented.
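
The core re-projection step can be sketched directly: under an affine camera, a point with fixed affine coordinates with respect to four basis points is always the same affine combination of the basis points' pixel positions, so tracking the four basis points is enough to place virtual geometry in every frame. The coordinates and pixel values below are illustrative.

```python
import numpy as np

def reproject_affine(basis_px, affine_coords):
    """basis_px: 4 x 2 tracked pixels of the basis points; affine_coords: (alpha, beta, gamma)."""
    p0, p1, p2, p3 = np.asarray(basis_px, dtype=float)
    a, b, c = affine_coords
    return p0 + a * (p1 - p0) + b * (p2 - p0) + c * (p3 - p0)

# A virtual vertex fixed at affine coordinates (0.5, 0.25, 1.0) follows the
# tracked basis points through every frame without Euclidean calibration.
frame_basis = [(100, 200), (220, 210), (110, 90), (150, 260)]
print(reproject_affine(frame_basis, (0.5, 0.25, 1.0)))
```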
