• Title/Summary/Keyword: Camera Model


A New Linear Explicit Camera Calibration Method (새로운 선형의 외형적 카메라 보정 기법)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.23 no.1
    • /
    • pp.66-71
    • /
    • 2014
  • Vision is the most important sensing capability for both humans and smart sensing machines such as intelligent robots. The sensed real 3D world and its 2D camera image can be related mathematically by a process called camera calibration. In this paper, we present a novel linear solution for camera calibration. Unlike most existing linear calibration methods, the technique proposed in this paper can identify camera parameters explicitly. Through the step-by-step procedure of the proposed method, the real physical elements of the perspective projection transformation matrix between 3D points and the corresponding 2D image points can be identified. This explicit solution will be useful for many practical 3D sensing applications, including robotics. We verified the proposed method using various cameras under different conditions.
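The perspective projection transformation the abstract refers to can be sketched as follows; a minimal NumPy example in which the intrinsic values and pose are hypothetical, and the paper's step-by-step explicit parameter identification is not reproduced:

```python
import numpy as np

# Hypothetical intrinsics and pose (not values from the paper).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera aligned with the world axes
t = np.array([0.0, 0.0, 5.0])       # world origin 5 units in front

P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix P = K[R|t]

def project(P, X):
    """Project a 3D world point X to 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)       # homogeneous projection
    return x[:2] / x[2]             # perspective division

uv = project(P, np.array([0.0, 0.0, 0.0]))
# the world origin projects to the principal point (320, 240)
```

A calibration method of the kind described recovers K, R, and t from known 3D-2D correspondences; the sketch only shows the forward model those parameters define.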

PRACTICAL WAYS TO CALCULATE CAMERA LENS DISTORTION FOR REAL-TIME CAMERA CALIBRATION

  • Park, Seong-Woo;Hong, Ki-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.125-131
    • /
    • 1999
  • In this paper, we address practical methods for calculating camera lens distortion for real-time applications. Although the lens distortion problem can easily be ignored for constant-parameter lenses, in real-time calibration of zoom-lens cameras a large number of calculations is needed to compute the distortion. However, if the distortion can be calculated independently of the other camera parameters, a camera can easily be calibrated without a large number of calculations. Based on Tsai's camera model, we propose two different methods for calculating lens distortion. These methods are so simple and require so few calculations that the lens distortion can be computed rapidly even in real-time applications. The first method refers to a focal length-lens distortion look-up table (LUT) constructed in the initialization process. The second method uses the relationship between the feature points found in the image. Experiments were carried out for both methods, and the results show that the proposed methods compare favorably in performance with the non-real-time full optimization method.
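A minimal sketch of the first method, assuming a hypothetical focal-length-to-distortion LUT built at initialization and a one-parameter radial distortion term; the table values are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical focal-length -> first radial distortion coefficient LUT,
# as would be constructed once during the initialization process.
focal_lengths = np.array([10.0, 20.0, 30.0, 40.0])     # zoom positions (mm)
k1_table      = np.array([-2e-3, -1e-3, -5e-4, -2e-4]) # illustrative values

def lookup_k1(f):
    """Interpolate the distortion coefficient for a zoom setting."""
    return float(np.interp(f, focal_lengths, k1_table))

def distort(x, y, k1):
    """Apply a one-parameter radial distortion in normalized coordinates."""
    r2 = x * x + y * y
    return x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)

k1 = lookup_k1(25.0)   # zoom setting midway between two table entries
```

The LUT lookup replaces a per-frame nonlinear optimization with one interpolation, which is what makes the approach viable in real time.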

A Study of Nondestructive Evaluation Using Scan type Magnetic Camera

  • Hwang, Ji-Seong;Lee, Jin-Yi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1830-1835
    • /
    • 2005
  • It is important to estimate the distribution of magnetic field intensity when applying magnetic methods to industrial nondestructive evaluation. A magnetic camera provides a quantitative magnetic field distribution with homogeneous lift-off and uniform spatial resolution, and the distribution of the magnetic field can be interpreted when a dipole model is introduced. This study presents numerical and experimental considerations for the quantitative evaluation of cracks of several sizes and shapes using the magnetic field images of the scan-type magnetic camera.
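The dipole model mentioned above can be sketched with the standard point-dipole field formula; the dipole moment and distances below are illustrative assumptions, not values from the paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Flux density of a point dipole with moment m (A*m^2) at offset r (m):
    B = mu0/(4*pi) * (3(m.rhat)rhat - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

# Hypothetical 1 A*m^2 dipole; the axial field is twice the equatorial
# field at the same distance, a signature usable when interpreting images.
m = np.array([0.0, 0.0, 1.0])
axial = dipole_field(m, np.array([0.0, 0.0, 0.01]))
equat = dipole_field(m, np.array([0.01, 0.0, 0.0]))
```

Fitting such a model to the measured field image is one way to quantify crack size and shape, as the abstract suggests.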


Compressed Sensing-based Multiple-target Tracking Algorithm for Ad Hoc Camera Sensor Networks

  • Lu, Xu;Cheng, Lianglun;Liu, Jun;Chen, Rongjun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1287-1300
    • /
    • 2018
  • A target-tracking algorithm based on ad hoc camera sensor networks (ACSNs) utilizes the distributed observation capability of nodes to achieve accurate target tracking. A compressed sensing-based multiple-target tracking algorithm (CSMTTA) for ACSNs is proposed in this work based on a study of the camera node observation projection model and the compressed sensing model. The proposed algorithm includes reconstruction of observed signals and evaluation of target locations. It reconstructs observed signals by solving a convex L1-norm minimization problem and forecasts the node group used to evaluate a target location from the motion features of the target. Simulation results show that CSMTTA can recover the sampled observation information accurately under sparse sampling, achieve high target-tracking accuracy, and accomplish the distributed tracking task for multiple mobile targets.
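The L1-norm reconstruction step can be illustrated with a generic sparse-recovery sketch; this uses plain iterative soft-thresholding (ISTA) on a synthetic problem, not the paper's ACSN observation model:

```python
import numpy as np

def ista(A, y, lam=0.001, iters=3000):
    """Iterative soft-thresholding: a simple solver for the L1-regularized
    least squares problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))       # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))               # sparse sampling: 30 of 100
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]           # 3-sparse signal
y = A @ x_true
x_hat = ista(A, y)                               # sparse reconstruction
```

With far fewer measurements than unknowns, the L1 penalty recovers the sparse signal, which is the property the tracking algorithm relies on.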

Analysis of convergent looking stereo camera model (교차 시각 스테레오 카메라 모델 해석)

  • 이적식
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.10
    • /
    • pp.50-62
    • /
    • 1996
  • A parallel looking stereo camera has mainly been used as an input sensor for digital image processing, image understanding, and the extraction of 3-dimensional information. This paper presents a theoretical analysis and performance evaluation of a convergent looking stereo camera model, which has a fixation point formed by crossing the optical axes. The quantization error, depth resolution, and equi-depth map due to digital pixels, as well as the misalignment effects of pan, tilt, and roll angles, are analyzed using the relationship between the reference and image coordinate systems. The horopter, epipolar lines, probability density functions of the depth error, and stereo fusion areas for the two camera models are also discussed.
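The quantization-error behaviour discussed above is easiest to see in the parallel-looking case; a small sketch with a hypothetical focal length and baseline (the paper's convergent geometry modifies these formulas):

```python
# Depth resolution of a parallel-axis stereo pair: for focal length f (px),
# baseline b, and disparity d (px), depth is Z = f*b/d, and a one-pixel
# disparity quantization step changes depth by roughly dZ = Z^2 / (f*b).
f_px = 800.0    # hypothetical focal length in pixels
b = 0.10        # hypothetical baseline in metres

def depth(d_px):
    """Depth reconstructed from disparity."""
    return f_px * b / d_px

def depth_step(Z):
    """Approximate depth change caused by a 1-pixel disparity error."""
    return Z * Z / (f_px * b)

Z = depth(8.0)        # 10 m
dZ = depth_step(Z)    # 1.25 m: the error grows quadratically with depth
```

The quadratic growth of dZ with Z is why the paper's equi-depth maps coarsen rapidly away from the cameras.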


Assessment of a smartphone-based monitoring system and its application

  • Ahn, Hoyong;Choi, Chuluong;Yu, Yeon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.3
    • /
    • pp.383-397
    • /
    • 2014
  • Advances in information technology are allowing conventional surveillance systems to be combined with mobile communication technologies, creating ubiquitous monitoring systems. This paper proposes a monitoring system that uses smart camera technology. We discuss the dependence of interior orientation parameters on calibration target sheets and compare the accuracy of a three-dimensional monitoring system, whose camera location is calculated by space resection, against a Digital Surface Model (DSM) generated from stereo images. A monitoring housing is designed to protect a camera from various weather conditions and to supply the camera with power generated from a solar panel. A smart camera is installed in the monitoring housing and is operated and controlled through an Android application. Finally, the accuracy of the three-dimensional monitoring system is evaluated using a DSM. The proposed system was tested against a DSM created from ground control points determined by Global Positioning System (GPS) and light detection and ranging data. The standard deviation of the differences between the DSMs is less than 0.12 m; the monitoring system is therefore appropriate for extracting the position and deformation of objects as well as for monitoring them. Through the incorporation of components such as the camera housing, a solar power supply, and the smart camera, the system can be used as a ubiquitous monitoring system.

An active stereo camera modeling (동적 스테레오 카메라 모델링)

  • Do, Kyoung-Mihn;Lee, Kwae-Hi
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.3 no.3
    • /
    • pp.297-304
    • /
    • 1997
  • In stereo vision, camera modeling is very important because the accuracy of the computed three-dimensional locations depends considerably on it. In existing stereo camera models, the two camera planes are located in the same plane or on the optical axis. These models cannot be used in an active vision system, where it is necessary to obtain two stereo images simultaneously. In this paper, we propose four kinds of stereo camera models for an active stereo vision system in which the focal lengths of the two cameras differ and each camera can rotate independently. A single closed-form solution is obtained for all models. The influence of the stereo camera model on the field of view, occlusion, and the search area used for matching is shown, and errors due to inaccurate focal length are analyzed with simulation results. By applying the proposed stereo camera models to an active stereo vision system, such as a mobile robot, the three-dimensional locations of objects are expected to be determined in real time.
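A closed-form triangulation of the kind the abstract describes, for two cameras that may be rotated independently, can be sketched as the midpoint of the two viewing rays; the camera placement below is a toy assumption, not one of the paper's four models:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays:
    a closed-form 3D estimate for two independently rotated cameras.
    c: camera centre, d: unit viewing direction of the matched feature."""
    # Solve [d1 -d2] [s, t]^T = c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    s, t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two cameras on a 0.2 m baseline, verged so both rays hit (0, 0, 1).
c1, c2 = np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
d1 = (target - c1) / np.linalg.norm(target - c1)
d2 = (target - c2) / np.linalg.norm(target - c2)
X = triangulate(c1, d1, c2, d2)
```

Because each camera's rotation only changes the ray directions, the same closed form covers all vergence configurations.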


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on preventing obstacle detection by lowering the confidence score of the object recognition model, but they have the limitation that an attack is possible only on the target model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack performance experiments over different scaling sizes, the attack caused fusion errors in more than 77% of cases on average.
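The core perturbation, uniform scaling of the input point cloud, can be sketched as follows; this is a simplified stand-in for the paper's attack, and the cloud and scaling factor are synthetic:

```python
import numpy as np

def scaling_attack(points, factor):
    """Uniformly scale LiDAR points about the sensor origin.
    The perturbation is hard to spot visually, yet it displaces every
    projected point a calibration model must align with the camera image."""
    return points * factor

rng = np.random.default_rng(1)
cloud = rng.uniform(-10.0, 10.0, size=(1000, 3))   # synthetic scan (metres)
attacked = scaling_attack(cloud, 1.05)             # 5% scaling
# Mean displacement introduced by the attack:
shift = np.linalg.norm(attacked - cloud, axis=1).mean()
```

Because the displacement grows with range, distant points are perturbed most, which is one reason such an attack can degrade extrinsic calibration.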

An Image-based 3-D Shape Reconstruction using Pyramidal Volume Intersection (피라미드 볼륨 교차기법을 이용한 영상기반의 3차원 형상 복원)

  • Lee Sang-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.1
    • /
    • pp.127-135
    • /
    • 2006
  • Image-based 3D modeling is the technique of generating a 3D graphic model from images acquired by cameras, and it is being researched as an alternative to expensive 3D scanners. In this paper, I propose an image-based 3D modeling system using a calibrated camera. The proposed algorithm for rendering a 3D model consists of three steps: camera calibration, 3D shape reconstruction, and 3D surface generation. In the camera calibration step, I estimate the camera matrix of the image acquisition camera. In the 3D shape reconstruction step, I calculate 3D volume data from silhouettes using pyramidal volume intersection. In the 3D surface generation step, the reconstructed volume data are converted to a 3D mesh surface. As the results show, the system generates a relatively accurate 3D model.
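The volume-intersection step can be sketched as silhouette-based voxel carving; this toy example uses a single orthographic view and omits the pyramidal coarse-to-fine refinement:

```python
import numpy as np

def carve(voxels, project, silhouette):
    """Keep only voxels whose projection falls inside the silhouette mask.
    Intersecting this test over all views yields the visual hull."""
    keep = []
    for v in voxels:
        u, w = project(v)
        if (0 <= u < silhouette.shape[1] and 0 <= w < silhouette.shape[0]
                and silhouette[w, u]):
            keep.append(v)
    return keep

# Toy example: a top view whose silhouette is a 2x2 square of pixels.
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True
proj = lambda v: (int(v[0]), int(v[1]))   # drop z: orthographic projection
grid = [(x, y, z) for x in range(4) for y in range(4) for z in range(2)]
hull = carve(grid, proj, sil)             # 2*2*2 = 8 voxels survive
```

The pyramidal variant runs the same test on a coarse grid first and subdivides only surviving voxels, which is what keeps the method tractable.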

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of scenes from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points. This approach is derived from the correspondence of feature points detected in images and estimates depth from information on the motion of those feature points. Approaches using motion vectors suffer from the occlusion or missing-part problem, and image blur is ignored in the feature point detection. This paper presents a novel approach: defocus technique-based depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. For this, we first discuss the optical properties of a camera system, because the image blur varies according to the camera parameter settings. The camera system is described by a model integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points using sequential SVD factorization, which represents the orthogonal matrices of the singular value decomposition. Experiments have been performed on sequences of real and synthetic images, comparing the presented method with depth from lens translation. Experimental results demonstrate the validity and applicability of the proposed method for depth estimation.
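The SVD factorization step can be sketched in its batch form; the paper's sequential variant updates this factorization incrementally, and the affine ambiguity resolution is omitted here:

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a measurement matrix W (2F x P) into motion M
    and shape S: the SVD step of Tomasi-Kanade-style structure from motion
    (the affine ambiguity between M and S is left unresolved)."""
    W = W - W.mean(axis=1, keepdims=True)      # centre each row's features
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # motion (2F x 3)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]       # shape  (3 x P)
    return M, S

rng = np.random.default_rng(2)
S_true = rng.standard_normal((3, 20))          # 20 synthetic 3D points
M_true = rng.standard_normal((8, 3))           # 4 frames (2 rows per frame)
W = M_true @ S_true                            # noise-free measurements
M, S = factorize(W)
# M @ S reproduces the centred measurement matrix up to numerical error.
```

On noise-free affine measurements the centred matrix has rank 3 exactly, so the truncated SVD recovers it; with noise, the truncation is the optimal rank-3 fit.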
