• Title/Summary/Keyword: reprojection error

3D Reconstruction using the Key-frame Selection from Reprojection Error (카메라 재투영 오차로부터 중요영상 선택을 이용한 3차원 재구성)

  • Seo, Yung-Ho; Kim, Sang-Hoon; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.1 / pp.38-46 / 2008
  • Key-frame selection is the process of selecting the images necessary for 3D reconstruction from a set of uncalibrated images. Camera calibration of the selected images is also required for 3D reconstruction. In this paper, we propose a new key-frame selection method that minimizes the camera calibration error. Using full auto-calibration, we estimate the camera parameters for all selected key-frames. We remove false matches using the fundamental matrix computed algebraically from the estimated camera parameters, and finally obtain the 3D reconstructed data. Our experimental results show that the proposed approach requires a lower time cost than other methods while producing the smallest reconstruction error. The elapsed time for estimating the fundamental matrix is very short, and the error of the estimated fundamental matrix is similar to that of other methods.
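
Not part of the abstract above: a minimal Python sketch of how a mean reprojection error, the quantity used here to rank key-frames, is typically computed once camera parameters have been estimated. The variable names and synthetic inputs are illustrative only.

```python
import numpy as np
import cv2

# Illustrative inputs: 3D points (N x 3), their observed 2D projections (N x 2),
# an estimated camera pose (rvec, tvec), intrinsics K, and distortion coefficients.
object_points = np.random.rand(20, 3).astype(np.float32)
image_points = np.random.rand(20, 2).astype(np.float32)
rvec = np.zeros((3, 1))
tvec = np.array([[0.0], [0.0], [5.0]])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Project the 3D points with the estimated parameters and compare against the
# observed 2D points; the RMS of the distances is the reprojection error
# commonly reported by calibration and reconstruction pipelines.
projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
residuals = projected.reshape(-1, 2) - image_points
rms_error = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(f"RMS reprojection error: {rms_error:.3f} px")
```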

Line-Based SLAM Using Vanishing Point Measurements Loss Function (소실점 정보의 Loss 함수를 이용한 특징선 기반 SLAM)

  • Hyunjun Lim; Hyun Myung
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.330-336 / 2023
  • In this paper, a novel line-based simultaneous localization and mapping (SLAM) method using a loss function for vanishing point measurements is proposed. In general, the Huber norm is used as the loss function for point and line features in feature-based SLAM. Because point and line feature measurements define the reprojection error in the image plane as a residual, linear loss functions such as the Huber norm are appropriate for them. However, such typical loss functions are not suitable for vanishing point measurements, whose residuals are unbounded. To tackle this problem, we propose a loss function for vanishing point measurements based on the unit sphere model. Finally, we prove the validity of the proposed loss function through experiments on a public dataset.
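
Not from the paper: a small sketch of the underlying idea, assuming vanishing points are represented as unit direction vectors. An image-plane residual for a vanishing point can grow without bound, whereas the angle between direction vectors on the unit sphere stays bounded, which is what makes a sphere-based loss attractive. Function and variable names are illustrative.

```python
import numpy as np

def huber(r, delta=1.0):
    """Standard Huber loss: quadratic near zero, linear for large residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def unit_sphere_residual(vp_measured, vp_predicted):
    """Angle between two vanishing-point directions on the unit sphere.

    Vanishing points are taken as 3D direction vectors (e.g. K^-1 [u, v, 1]^T,
    normalized); the angular residual is bounded by pi, unlike the image-plane
    distance, which diverges as the vanishing point moves toward infinity.
    """
    a = vp_measured / np.linalg.norm(vp_measured)
    b = vp_predicted / np.linalg.norm(vp_predicted)
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

# A vanishing point far out in the image still gives a small, bounded
# angular residual between direction vectors.
vp_meas = np.array([1e4, 5.0, 1.0])
vp_pred = np.array([9.9e3, 6.0, 1.0])
print(huber(unit_sphere_residual(vp_meas, vp_pred)))
```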

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li; Xu, Ai-jun; Jiang, Guang-yu
    • Journal of Information Processing Systems / v.15 no.5 / pp.1082-1095 / 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to obtain ideal internal and external camera parameters efficiently with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Based on the traditional camera calibration method, the new method introduces two-order radial and tangential distortion models to establish a camera model with nonlinear distortion terms. Meanwhile, the nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is used for iterative parameter optimization, so the new method can quickly obtain high-precision internal and external camera parameters. The experimental results show that the new method improves the efficiency and precision of camera calibration. A terminal simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is an ideal camera calibration method for mobile terminals and can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile terminals.
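
As a companion to the abstract, a hedged sketch of the two-order radial plus tangential distortion model it describes, applied to normalized image coordinates; the parameter names are generic and the joint Levenberg-Marquardt refinement is only indicated in comments.

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Apply two-order radial (k1, k2) and tangential (p1, p2) distortion
    to normalized image coordinates (xn, yn)."""
    r2 = xn**2 + yn**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
    yd = yn * radial + p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
    return xd, yd

# Pixel coordinates follow from the intrinsics: u = fx * xd + cx, v = fy * yd + cy.
# In practice, the distortion and intrinsic parameters are refined jointly with a
# Levenberg-Marquardt solver (e.g. scipy.optimize.least_squares(..., method="lm")),
# minimizing the reprojection error over all checkerboard corners.
```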

Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition (다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법)

  • Kim, Jung Hee; Yun, Yeohun; Kim, Junsu; Yun, Kugjin; Cheong, Won-Sik; Kang, Suk-Ju
    • Journal of Broadcast Engineering / v.24 no.6 / pp.919-927 / 2019
  • In this paper, we propose an accurate camera calibration method for acquiring multiview stereoscopic images. Generally, camera calibration is performed using checkerboard patterns. The checkerboard pattern simplifies the feature point extraction process and exploits the known lattice structure, which enables accurate estimation of the relations between points on the 2-dimensional image and points in 3-dimensional space. Since the estimation accuracy of the camera parameters depends on feature matching, accurate detection of the checkerboard corners is crucial. Therefore, in this paper, we propose a method that performs accurate camera calibration through accurate detection of checkerboard corners. The proposed method detects checkerboard corner candidates using 1-dimensional Gaussian filters, followed by a corner refinement process that removes outliers from the candidates and detects the checkerboard corners accurately at sub-pixel precision. To verify the proposed method, we check the reprojection errors and camera location estimation results to confirm the estimation accuracy of the camera intrinsic and extrinsic parameters.
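
The paper's own 1-D Gaussian corner detector is not reproduced here; the following sketch shows the standard OpenCV checkerboard detection with sub-pixel corner refinement, which illustrates the kind of pipeline being improved. The file name and board size are placeholders.

```python
import cv2

# Placeholder image and board geometry; adjust for the actual data.
gray = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)
pattern_size = (9, 6)  # inner corners per row and column

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine each corner to sub-pixel accuracy within a small search window
    # around the initial detection.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```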

Calibration Method of Plenoptic Camera using CCD Camera Model (CCD 카메라 모델을 이용한 플렌옵틱 카메라의 캘리브레이션 방법)

  • Kim, Song-Ran; Jeong, Min-Chang; Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.2 / pp.261-269 / 2018
  • This paper presents a convenient method to estimate the internal parameters of a plenoptic camera using the CCD (charge-coupled device) camera model. The images used for plenoptic camera calibration generally contain the checkerboard pattern used in CCD camera calibration. Based on the CCD camera model, the plenoptic camera model can be derived through the relationship between the two models. We formulate four equations that express the focal length, the principal point, the baseline, and the distance between the virtual camera and the object. By applying a nonlinear optimization technique, we solve the equations to estimate the parameters. We compare the estimation results with the actual parameters and evaluate the reprojection error. Experimental results show that the MSE (mean squared error) is 0.309 and the estimated values are very close to the actual values.
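
Not the paper's actual equations: a skeleton showing how a small nonlinear system relating focal length, principal point, baseline, and object distance could be solved with a generic nonlinear least-squares solver. The residual function is a placeholder to be filled with the model equations.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, measurements):
    """Placeholder residual vector for the four model equations relating the
    focal length f, principal point c, baseline b, and object distance d to
    measured quantities. The actual equations follow from the relationship
    between the plenoptic and CCD camera models described in the paper."""
    f, c, b, d = params
    # ... evaluate the four model equations against the measurements ...
    return np.zeros(4)  # replace with the real residuals

x0 = np.array([1000.0, 300.0, 0.05, 1.0])   # illustrative initial guess
solution = least_squares(residuals, x0, args=(None,))
print(solution.x)
```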

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan; Lee, Geun-Mo; Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering / v.8 no.2 / pp.89-96 / 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and the camera sensors have the disadvantage that special calibration objects are required or the object size is too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates from four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data is acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method results in a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
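
A short worked sketch of the minimal solver implied by the abstract: the center of a sphere passing through four non-coplanar 3D points, which would sit inside a RANSAC loop over the lidar range data. The example points are synthetic.

```python
import numpy as np

def sphere_center_from_four_points(p):
    """Center of the sphere passing through four non-coplanar 3D points.

    Subtracting |p_i - c|^2 = R^2 (i = 1..3) from the equation for p_0
    eliminates R^2 and leaves a 3x3 linear system in the center c.
    """
    p = np.asarray(p, dtype=float)          # shape (4, 3)
    A = 2.0 * (p[1:] - p[0])                # (3, 3)
    b = np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
    return np.linalg.solve(A, b)

# Synthetic test: points sampled on a sphere with center (1, 2, 3), radius 0.5.
center = np.array([1.0, 2.0, 3.0])
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.577, 0.577, 0.577]])
pts = center + 0.5 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
print(sphere_center_from_four_points(pts))  # ~ [1. 2. 3.]
```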

Relative RPCs Bias-compensation for Satellite Stereo Images Processing (고해상도 입체 위성영상 처리를 위한 무기준점 기반 상호표정)

  • Oh, Jae Hong; Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.4 / pp.287-293 / 2018
  • Generating epipolar resampled images by reducing the y-parallax is a prerequisite for accurate and efficient processing of satellite stereo images. Minimizing the y-parallax requires accurate sensor modeling, which is carried out with ground control points. However, this approach is not feasible over inaccessible areas where control points cannot be easily acquired. In such cases, a relative orientation can be carried out using only conjugate points, but its accuracy for satellite sensors should be studied because their geometry differs from that of well-known frame-type cameras. Therefore, we carried out bias compensation of the RPCs (Rational Polynomial Coefficients) without any ground control points to study its precision and its effect on the y-parallax in the epipolar resampled images. The conjugate points were generated by stereo image matching with outlier removal. RPC compensation was performed based on affine and polynomial models. We analyzed the reprojection error of the compensated RPCs and the y-parallax in the resampled images. The experimental results showed a one-pixel level of y-parallax for Kompsat-3 stereo data.
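
Not the authors' exact implementation: a sketch of a common affine bias-compensation form for RPCs, fitted in image space from conjugate-point residuals by linear least squares. Array shapes and names are illustrative.

```python
import numpy as np

def fit_affine_bias(sample_line, residual):
    """Fit an affine bias-compensation model in image space,
    d = a0 + a1 * sample + a2 * line, by linear least squares,
    solved independently for the sample and line residuals.

    sample_line: (N, 2) image coordinates (sample, line) of conjugate points
    residual:    (N, 2) differences between measured and RPC-projected coordinates
    """
    n = sample_line.shape[0]
    A = np.hstack([np.ones((n, 1)), sample_line])     # (N, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coeffs  # (3, 2): one column of coefficients per image axis

# The fitted coefficients are added to the RPC-predicted image coordinates
# (or absorbed into the RPCs) before epipolar resampling, reducing the y-parallax.
```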

Automated Image Matching for Satellite Images with Different GSDs through Improved Feature Matching and Robust Estimation (특징점 매칭 개선 및 강인추정을 통한 이종해상도 위성영상 자동영상정합)

  • Ban, Seunghwan; Kim, Taejung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1257-1271 / 2022
  • Recently, many Earth observation optical satellites have been developed as demand for them has increased. Rapid preprocessing of satellite images has therefore become one of the most important problems for their active utilization. Satellite image matching is a technique in which two images are transformed and represented in one specific coordinate system. This technique is used for aligning different bands or correcting the relative position error between two satellite images. In this paper, we propose an automatic image matching method for satellite images with different ground sampling distances (GSDs). Our method is based on improved feature matching and robust estimation of the transformation between satellite images. The proposed method consists of five processes: calculation of the overlapping area, improved feature detection, feature matching, robust estimation of the transformation, and image resampling. For feature detection, we extract the overlapping areas and resample them to equalize their GSDs. For feature matching, we use Oriented FAST and Rotated BRIEF (ORB) to improve the matching performance. We performed image registration experiments with KOMPSAT-3A and RapidEye images. The performance of the proposed method was verified qualitatively and quantitatively. The reprojection errors of the image matching were in the range of 1.277 to 1.608 pixels with respect to the GSD of the RapidEye images. Finally, we confirmed the possibility of matching satellite images with heterogeneous GSDs through the proposed method.
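
A hedged sketch of the ORB matching and RANSAC-based robust estimation steps named in the abstract, using standard OpenCV calls; the file names, ratio-test threshold, and affine model choice are assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np

# Illustrative inputs: two overlapping patches already resampled to the same GSD.
img1 = cv2.imread("ref_patch.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target_patch.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])

# Robustly estimate the transformation between the images with RANSAC;
# the inlier reprojection threshold is in pixels.
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
```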

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O; Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their views do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between the camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are provided. The intrinsic parameters are composed of the focal length, skew factor, and principal point, and the extrinsic parameters are composed of the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of a thermal camera that has three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
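
Not from the paper: a small illustration of how the intrinsic parameters listed in the abstract (focal length, skew factor, principal point) compose the intrinsic matrix used when computing re-projection errors. All numeric values are made up.

```python
import numpy as np

# Illustrative intrinsic parameters for one of the thermal lenses.
fx, fy = 520.0, 522.0     # focal lengths in pixels
s = 0.0                   # skew factor
cx, cy = 320.0, 256.0     # principal point

# The intrinsic matrix K maps normalized camera coordinates to pixel coordinates.
K = np.array([[fx, s,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

# A point (X, Y, Z) in the camera frame projects to pixel coordinates (u, v).
X, Y, Z = 0.1, -0.05, 2.0
u, v, _ = K @ np.array([X / Z, Y / Z, 1.0])
print(u, v)
```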

Automatic Validation of the Geometric Quality of Crowdsourcing Drone Imagery (크라우드소싱 드론 영상의 기하학적 품질 자동 검증)

  • Dongho Lee; Kyoungah Choi
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.577-587 / 2023
  • The utilization of crowdsourced spatial data has been actively researched; however, issues stemming from the uncertainty of data quality have been raised. In particular, when low-quality data are mixed into drone imagery datasets, they can degrade the quality of the spatial information output. To address these problems, this study presents a methodology for automatically validating the geometric quality of crowdsourced imagery. Key quality factors such as spatial resolution, resolution variation, matching point reprojection error, and bundle adjustment results are utilized. To classify imagery suitable for spatial information generation, training and validation datasets are constructed, and machine learning is conducted using a radial basis function (RBF)-based support vector machine (SVM) model. The trained SVM model achieved a classification accuracy of 99.1%. To evaluate the effectiveness of the quality validation model, imagery sets before and after applying the model to drone images not used in training and validation are compared by generating orthoimages. The results confirm that applying the quality validation model reduces various distortions that can be included in orthoimages and enhances object identifiability. The proposed quality validation methodology is expected to increase the utility of crowdsourced data in spatial information generation by automatically selecting high-quality data from the multitude of crowdsourced data of varying quality.
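
Not the study's actual pipeline or data: a minimal sketch of training an RBF-kernel SVM on per-image quality features such as those listed in the abstract. The features and labels below are synthetic stand-ins.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in features: [spatial resolution, resolution variation,
# tie-point reprojection error, bundle adjustment statistic], one row per image.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * X[:, 3] < 0.5).astype(int)  # 1 = suitable, 0 = unsuitable

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with feature standardization.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```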