• Title/Summary/Keyword: one camera

Search results: 1,583

Establishment of Test Field for Aerial Camera Calibration (항공 카메라 검정을 위한 테스트 필드 구축방안)

  • Lee, Jae-One;Yoon, Jong-Seong;Sin, Jin-Soo;Yun, Bu-Yeol
    • Journal of Korean Society for Geospatial Information Science / v.16 no.2 / pp.67-76 / 2008
  • Recently, one of the most notable technological trends in aerial surveying is the application of direct georeferencing, which integrates primary imaging sensors such as aerial cameras or LiDAR with the positioning sensors GPS and IMU. In addition, a variety of digital aerial mapping cameras have been developed and supplied, with their technical superiority and applicability verified. In line with this trend, the development of multi-looking aerial photographing systems is making 3-D information acquisition and texture mapping possible for dead areas, such as building facades and regions of strong terrain variation, where traditional photogrammetry is not valid. However, a multi-looking camera that integrates different sensors and a multi-camera array raises problems of time synchronization among the sensors and of their geometric and radiometric calibration. The establishment of a test field for aerial sensor calibration is essential to solve these problems. Therefore, this paper reviews photogrammetric test fields in other countries and suggests a scheme for establishing a domestic test field.


3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications, such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of these applications. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the image formation process of the camera. Many recent vision systems therefore use a stereo camera, especially for 3D tracking. 3D feature based tracking (3DFBT), one of the 3D tracking approaches based on stereo vision, has many advantages compared to other tracking methods. Assuming the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an exact camera model, so modelling error and sensitivity to lens distortion are embedded. Therefore, this paper proposes a 3D feature based tracking method that uses an SVM to solve the reconstruction problem.

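As a rough illustration of the idea of learning 3D reconstruction from stereo image coordinates without an explicit camera model, here is a minimal sketch using scikit-learn's SVR on synthetic data from a hypothetical rectified stereo rig (the focal length, principal point, and baseline below are assumptions, not values from the paper):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Hypothetical rectified stereo rig: focal length f (px), principal point (cx, cy),
# baseline b (m).  Known 3D points are projected into both views and an SVR learns
# the inverse mapping (u_l, v_l, u_r, v_r) -> (X, Y, Z), i.e. reconstruction without
# an explicit calibrated camera model.
f, cx, cy, b = 800.0, 320.0, 240.0, 0.12

rng = np.random.default_rng(0)
world = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 3.0], size=(300, 3))
X, Y, Z = world.T
left = np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)
right = np.stack([f * (X - b) / Z + cx, f * Y / Z + cy], axis=1)
features = np.hstack([left, right])

model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.001))
model.fit(features, world)

# Reconstruct a tracked feature from its matched stereo pixel coordinates.
print(model.predict(features[:1]), world[0])
```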

EM Development of Dual Head Star Tracker for STSAT-2 (과학기술위성2호의 이중 머리 별 추적기 개발)

  • Sin, Il-Sik;Lee, Seong-Ho;Yu, Chang-Wan;Nam, Myeong-Ryong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.2 / pp.96-100 / 2006
  • We developed the Dual Head Star Tracker (DHST) to obtain attitude information for the Science and Technology Satellite 2 (STSAT-2). Because most star sensors have only one head camera, star recognition is impossible when the camera points toward the Sun or the Earth. We therefore designed the DHST, which can obtain star images from two directions simultaneously. That is, even if star recognition fails on the image obtained by one camera, it is still possible to recognize stars in the image obtained by the other camera. In this paper, we introduce the engineering model (EM) of the DHST and propose a star recognition and a star tracking algorithm.
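
A minimal sketch of the dual-head fallback idea described above, assuming a hypothetical `recognize` routine that returns an attitude estimate or `None` when star identification fails (for example, when that head is blinded by the Sun or the Earth):

```python
from typing import Callable, Optional

import numpy as np


def dual_head_attitude(image_a: np.ndarray,
                       image_b: np.ndarray,
                       recognize: Callable[[np.ndarray], Optional[np.ndarray]]) -> Optional[np.ndarray]:
    """Try star recognition on head A first, then fall back to head B."""
    for image in (image_a, image_b):
        attitude = recognize(image)
        if attitude is not None:
            return attitude
    return None  # both heads failed this cycle; no attitude update available
```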

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • For 3-D vision measurement, camera calibration is necessary to calculate the parameters accurately. Camera calibration methods have largely been developed in two categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration that uses the perspective variation ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. An advantage of this algorithm is that it can estimate the distance to the object, so the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame mounted on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error caused by the scale factor and the different line widths, and experimentally found an average scale factor that yields the smallest distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible, advancing camera calibration one step further from static environments toward real-world settings such as autonomous land vehicles.

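The distance-estimation part rests on the basic pinhole relation between a known physical width and its apparent width in pixels; the sketch below shows only that relation (with made-up numbers), not the authors' full perspective-variation-ratio algorithm:

```python
def distance_from_width(focal_px: float, real_width_m: float, pixel_width: float) -> float:
    """Pinhole approximation: an object of known width W metres that appears
    w pixels wide, seen by a camera with focal length f pixels, lies at
    roughly Z = f * W / w metres (fronto-parallel case, distortion ignored)."""
    return focal_px * real_width_m / pixel_width


# Example: a 0.05 m grid line imaged 40 px wide with f = 800 px -> about 1.0 m away.
print(distance_from_width(800.0, 0.05, 40.0))
```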

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their images do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between the camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are provided. The intrinsic parameters are composed of the focal length, skew factor, and principal point, and the extrinsic parameters are composed of the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
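
For reference, a minimal sketch of how intrinsic parameters and re-projection errors of the kind reported above are typically obtained with OpenCV's chessboard calibration; the board geometry, square size, and file names are placeholders, and the paper's deep-learning image enhancement step is not shown:

```python
import cv2
import numpy as np

pattern = (9, 6)   # inner corners of the calibration board (assumed)
square = 0.025     # board square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in ["thermal_00.png", "thermal_01.png"]:   # placeholder file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

# Intrinsic matrix K, distortion coefficients, and per-view poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

# Per-view RMS re-projection error, in pixels.
errors = []
for op, ip, rv, tv in zip(obj_points, img_points, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
    errors.append(np.sqrt(np.mean((ip.reshape(-1, 2) - proj.reshape(-1, 2)) ** 2)))
print("K =\n", K, "\nRMS re-projection error per view:", errors)
```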

Fabrication of the Imaging Lens for Mobile Camera using Embossing Method (엠보싱 공법에 의한 카메라 모듈용 광학렌즈 성형기법에 대한 연구)

  • Lee, C.H.;Jin, Y.S.;Noh, J.E.;Kim, S.H.;Jang, I.C.
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 2007.05a / pp.79-83 / 2007
  • We have developed a compact and cost-effective camera module on the basis of wafer-scale replication technology. A multi-layered structure of several aspheric lenses for a mobile camera module is first assembled by bonding multiple glass wafers on which 2-dimensional replica arrays of identical aspheric lenses are UV-embossed, followed by dicing the stacked wafers and packaging them with image sensor chips. We have demonstrated a VGA camera module fabricated by this wafer-scale replication process with various UV-curable polymers having refractive indices between 1.4 and 1.6, and with three different glass wafers whose surfaces are embossed on both sides as aspheric lenses with a 200 um sag height and aspheric coefficients of the lens polynomials up to tenth order. We have found that precise compensation for the shrinkage of the polymer materials is one of the key technical challenges in achieving higher resolution in wafer-scale lenses for mobile camera modules.


SATELLITE ORBIT AND ATTITUDE MODELING FOR GEOMETRIC CORRECTION OF LINEAR PUSHBROOM IMAGES

  • Park, Myung-Jin;Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2002.10a / pp.543-547 / 2002
  • In this paper, we introduce an improved camera modeling method for linear pushbroom images relative to the method proposed by Orun and Natarajan (ON). The ON model achieves an accuracy within 1 pixel when more than 10 ground control points (GCPs) are provided. In general, there is a high correlation between platform position and attitude parameters, and the ON model ignores attitude variation in order to overcome this correlation. We propose a new method that obtains an optimal solution set of parameters without ignoring attitude variation. We first assume that the attitude parameters are constant and estimate the platform position parameters; we then estimate the platform attitude parameters using the estimated position parameters. As a result, we can set up an accurate camera model for a linear pushbroom satellite scene. In particular, the camera model can be applied to surrounding scenes, because it provides sufficient information on the satellite's position and attitude not only for a single scene but also for a whole imaging segment. We tested two images: one with a pixel size of 6.6 m $\times$ 6.6 m acquired by the EOC (Electro Optical Camera), and the other with a pixel size of 10 m $\times$ 10 m acquired by SPOT. Our camera modeling procedure was applied to these images and gave satisfying results: root mean square errors of 0.5 pixel and 0.3 pixel with 25 GCPs and 23 GCPs, respectively.

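The alternating estimation described above (fix attitude, solve for position, then refine attitude from the estimated position) can be sketched as follows; the projection model here is a deliberately simplified, hypothetical stand-in rather than the paper's pushbroom sensor model:

```python
import numpy as np
from scipy.optimize import least_squares


def project(points, pos, att, focal=1000.0):
    """Toy projection: world points minus platform position, rotated by a
    small-angle attitude (roll, pitch, yaw), then a pinhole division."""
    roll, pitch, yaw = att
    R = np.array([[1.0, -yaw, pitch],
                  [yaw, 1.0, -roll],
                  [-pitch, roll, 1.0]])
    cam = (points - pos) @ R.T
    return focal * cam[:, :2] / cam[:, 2:3]


def residuals(params, which, fixed, points, obs):
    pos, att = (params, fixed) if which == "pos" else (fixed, params)
    return (project(points, pos, att) - obs).ravel()


def alternate_estimate(points, obs, n_iter=5):
    pos = np.array([0.0, 0.0, 7.0e5])   # rough initial platform position
    att = np.zeros(3)                   # start by assuming constant (zero) attitude
    for _ in range(n_iter):
        pos = least_squares(residuals, pos, args=("pos", att, points, obs)).x
        att = least_squares(residuals, att, args=("att", pos, points, obs)).x
    return pos, att


# Synthetic check with 25 ground control points (GCPs).
rng = np.random.default_rng(0)
gcps = rng.uniform([-1e3, -1e3, 0.0], [1e3, 1e3, 100.0], size=(25, 3))
true_pos, true_att = np.array([50.0, -30.0, 7.0e5]), np.array([0.01, -0.02, 0.005])
obs = project(gcps, true_pos, true_att)
pos, att = alternate_estimate(gcps, obs)
rmse = np.sqrt(np.mean((project(gcps, pos, att) - obs) ** 2))
print("re-projection RMSE (px):", rmse)
```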

A Study on Concrete Efflorescence Assessment using Hyperspectral Camera (초분광 카메라를 이용한 콘크리트 백화 평가에 관한 연구)

  • Kim, Byunghyun;Kim, Daemyung;Cho, Soojin
    • Journal of the Korean Society of Safety / v.32 no.6 / pp.98-103 / 2017
  • In Korea, the guideline for bridge safety inspection requires assessing surface degradation, including cracking, efflorescence, and spalling, for the rating of concrete bridges. Currently, the assessment of efflorescence is based on visual inspection by expert engineers, which may lead to subjective results. In this study, a novel method using a hyperspectral camera is proposed for objective and accurate assessment of concrete efflorescence. The hyperspectral camera acquires the light intensity over a large number of continuous spectral bands for each pixel in an image, so hyperspectral imaging provides more detailed information than a color camera, which collects intensity for only the three bands corresponding to RGB (red, green, and blue). A stepwise assessment algorithm based on spectral features is proposed to separate the efflorescence area from the inspected concrete area. The algorithm is tested in a laboratory experiment using two concrete specimens: one is dark colored with efflorescence on its surface, while the other is bright concrete without efflorescence. The test shows the high accuracy and applicability of the proposed efflorescence assessment using a hyperspectral camera.
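
As a generic illustration of pixel-wise spectral classification of this kind (not necessarily the authors' stepwise algorithm), the sketch below flags pixels whose spectrum is close, in spectral-angle terms, to a reference efflorescence spectrum; the cube and reference used here are random placeholders:

```python
import numpy as np


def spectral_angle(cube, reference):
    """Per-pixel spectral angle (radians) between an H x W x B cube and a length-B reference."""
    dot = np.tensordot(cube, reference, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference) + 1e-12
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))


def efflorescence_mask(cube, reference, angle_thresh=0.10):
    """Boolean mask of pixels spectrally similar to the efflorescence reference."""
    return spectral_angle(cube, reference) < angle_thresh


# Placeholder data: a 100 x 100 image with 120 spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 120))
reference = rng.random(120)
print(efflorescence_mask(cube, reference).mean())  # fraction of flagged pixels
```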

Analysis of sideward footprint of Multi-view imagery by sidelap changing (횡중복도 변화에 따른 다각사진 Sideward Footprint 분석)

  • Seo, Sang-Il;Park, Seon-Dong;Kim, Jong-In;Yoon, Jong-Seong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.53-56 / 2010
  • An aerial multi-looking camera system is equipped with five separate cameras, which enables acquiring one vertical image and four oblique images at the same time. This provides more diverse information about the site than vertical aerial photographs alone. However, multi-looking aerial cameras for building 3D spatial information use a medium-format rather than a large-format CCD, so when acquiring forward, backward, left, and right imagery of particular objects, the photographic overlap and sidelap must be set carefully. In particular, the sidelap setting of the sideward-looking cameras determines whether a particular object can be captured. In this research, we analyzed the sideward footprint and the aerial photographing efficiency of multi-view imagery as the sidelap changes.

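The across-track footprint and flight-line spacing that such a sidelap analysis turns on can be approximated for a nadir frame camera as below; the altitude, sensor width, and focal length are made-up example values, and the oblique geometry of the sideward-looking heads is not modelled here:

```python
def ground_footprint_width(altitude_m: float, sensor_width_mm: float, focal_length_mm: float) -> float:
    """Across-track ground coverage of a nadir frame camera, in metres."""
    return altitude_m * sensor_width_mm / focal_length_mm


def flight_line_spacing(footprint_width_m: float, sidelap_ratio: float) -> float:
    """Distance between adjacent flight lines for a given sidelap (0-1)."""
    return footprint_width_m * (1.0 - sidelap_ratio)


# Example: 1500 m flying height, 49 mm wide medium-format sensor, 50 mm lens
# -> 1470 m footprint; with 30 % sidelap the flight lines are 1029 m apart.
w = ground_footprint_width(1500.0, 49.0, 50.0)
print(w, flight_line_spacing(w, 0.30))
```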

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems are widely used in 3D surface profiling. Usually, such a system is composed of a camera and a laser that projects a line (stripe) onto the target, and calibration is necessary to acquire 3D information from it. Conventional calibration algorithms find the pose of the camera and the equation of the laser stripe plane in the camera coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given task, but these algorithms require multiple images acquired under different poses for calibration. In this paper, we propose a calibration algorithm that works with just one shot and also provides 3D reconstruction in both the camera and the laser frame. This is achieved using a newly designed calibration structure with multiple vertical planes on a ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of the 3D reconstruction.
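
Once such a system is calibrated, reconstructing a stripe pixel reduces to intersecting its back-projected ray with the laser plane; a minimal sketch in the camera frame is given below (the intrinsic matrix and plane parameters are illustrative placeholders, and the paper's one-shot multi-plane calibration itself is not shown):

```python
import numpy as np


def triangulate_stripe_point(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the laser plane n.X + d = 0
    (both expressed in the camera frame); returns the 3D point in camera coordinates."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction of the back-projected ray
    t = -plane_d / (plane_n @ ray)                  # solve n.(t * ray) + d = 0 for t
    return t * ray


# Illustrative intrinsics and laser plane (not from the paper).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_n, plane_d = np.array([0.0, 0.7, -0.7]), 0.35
print(triangulate_stripe_point((350.0, 260.0), K, plane_n, plane_d))
```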