• Title/Summary/Keyword: one camera

Search Result 1,583

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.63-70
    • /
    • 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and its illumination in all directions from far fewer omnidirectional images. Owing to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera that works on uncalibrated images by considering the inlier distribution. First, a one-parameter non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute the essential matrix of the camera under unknown motion and then determine the camera pose: rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we achieve a precise estimation of the omnidirectional camera model and of the extrinsic parameters, including rotation and translation.

  • PDF
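The inlier-selection step described in the abstract, with the standard deviation as the quantitative measure, can be sketched roughly as follows. This is a minimal single-pass version; the threshold factor `k` and the residual definition are assumptions for illustration, not taken from the paper:

```python
import math

def select_inliers(residuals, k=1.0):
    """Keep correspondences whose residual lies within k standard
    deviations of the mean residual (single pass)."""
    mean = sum(residuals) / len(residuals)
    std = math.sqrt(sum((r - mean) ** 2 for r in residuals) / len(residuals))
    return [i for i, r in enumerate(residuals) if abs(r - mean) <= k * std]
```

With residuals such as `[0.1, 0.2, 0.15, 5.0]`, the gross outlier at index 3 is rejected while the consistent correspondences survive.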

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2003.07d
    • /
    • pp.2372-2374
    • /
    • 2003
  • In 3-D vision measurement, camera calibration is necessary to compute parameters accurately. Camera calibration methods fall broadly into two categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance; radial lens distortion, however, is not modeled. The advantage of this algorithm is that it can estimate the distance to the object, so the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, with very good results. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. This advances camera calibration a step beyond static environments toward real-world uses such as autonomous land vehicles.

  • PDF
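The distance estimation rests on the pinhole relation that a line's projected width shrinks inversely with depth. A minimal sketch with hypothetical `focal_px`, `width_mm`, and `width_px` parameters (the paper's actual formulation uses the perspective ratio between the frame's different line widths, which is not reproduced here):

```python
def distance_from_line_width(focal_px, width_mm, width_px):
    """Pinhole relation: w_image = f * W_world / Z, solved for depth Z.
    focal_px: focal length in pixels; width_mm: known physical line
    width; width_px: measured projected width in the image."""
    return focal_px * width_mm / width_px
```

For example, a 50 mm line imaged 20 px wide by an 800 px focal-length camera implies a depth of 2000 mm.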

Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong;Schenk, Tony
    • Korean Journal of Remote Sensing
    • /
    • v.24 no.3
    • /
    • pp.223-233
    • /
    • 2008
  • In the mid-1990s, the U.S. government released images acquired by the first generation of photo-reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of 70° that produces film strips with dimensions of 55 mm × 757 mm. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera. Detailed information about the camera is not available, however. To reconstruct points in object space from DISP imagery to an accuracy comparable to its high resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as affine transformation and rational functions, and the paper concludes with experimental results on the precision of reconstructed points in object space. The rigorous mathematical model for the KH-4A camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of scan time. With the known satellite velocity, this translates into a shift along-track; the exterior orientation therefore contains seven parameters to be estimated. The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of 30°.
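The time-dependent perspective center described in the abstract, the six classical exterior-orientation terms plus an along-track shift by the known satellite velocity, can be written as a one-line sketch (variable names are illustrative, not the paper's):

```python
def perspective_center(x0, y0, z0, vx, vy, vz, t):
    """Perspective center at scan time t for a panoramic scan:
    the static position (x0, y0, z0) shifted by the known
    satellite velocity (vx, vy, vz). The scan-time dependence is
    what adds the seventh exterior-orientation parameter."""
    return (x0 + vx * t, y0 + vy * t, z0 + vz * t)
```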

A Study on Usability Improvement of Camera Application of Galaxy S7 (갤럭시S7의 카메라 어플리케이션 사용성 개선에 관한연구)

  • Yu, Sung-ho;Lim, Seong-Taek
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.12
    • /
    • pp.249-255
    • /
    • 2017
  • Recently, the camera has become one of the most popular smartphone functions and one of the most influential factors in smartphone purchases. However, the basic camera application of a smartphone has a complicated user environment, which causes many difficulties for first-time users. In this study, the Galaxy S7, the newest model of the Galaxy S series, the most used series in Korea, was selected, and the usability test of its camera application was limited to the shooting, editing, and sharing functions. The results are as follows. First, improved icon graphics and text labels should be provided together to increase icon recognition and attention. Second, the structure should be simplified and an intuitive interface provided, to ease access to the various modes and functions. Third, because special functions that are not widely used cause high failure rates and inconvenience, the camera application should offer simplified, personalized menus or functions.

Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society
    • /
    • v.13 no.5
    • /
    • pp.43-54
    • /
    • 2013
  • With the continuing development of 3D application technology, visuals of more realistic quality have become available and are utilized in many applications such as games. In particular, for interacting with 3D objects in virtual environments, 3D graphics have led to substantial development in augmented reality. This study proposes a 3D user interface to control objects in 3D space through a virtual two-view method using only one camera. To do so, a homography matrix encoding the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates derived from the single camera, the homography matrix, and the camera's projection matrix. This method yields more accurate 3D information more quickly. The approach is advantageous in the reduced amount of calculation needed for one camera rather than two, is effective for real-time processing, and is at the same time economically efficient.
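The virtual second view in a method like this comes from mapping the observed 2D hand coordinate through the homography between the two camera positions. A minimal sketch of that mapping step with homogeneous normalization (the subsequent triangulation against the camera's projection matrix is omitted):

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography H (nested lists),
    normalizing by the homogeneous coordinate w."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)
```

A pure translation homography shifts the point directly; a general H also applies the perspective division by w.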

Volume Calculation for Filling Up of Rubbish Using Stereo Camera and Uniform Mesh (스테레오 카메라와 균일 매시를 이용한 매립지의 환경감시를 위한 체적 계산 알고리즘)

  • Lee, Young-Dae;Cho, Sung-Youn;Kim, Kyung;Lee, Dong-Gyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.15-22
    • /
    • 2012
  • For the construction of a safe and clean urban environment, it is necessary to identify rubbish waste and to know its volume accurately. In this paper, we developed an algorithm that computes waste volume using a stereo camera, to improve monitoring of the waste repository. Using the stereo vision camera, we first computed the distortion parameters of the stereo camera and then obtained a point cloud of the object's surface by measuring the target object. Taking the point cloud as input to the volume calculation algorithm, we obtained the waste volume of the target object. For this purpose, we suggest two volume calculation algorithms based on a uniform meshing method. The difference between volumes measured on successive days, e.g., today's and yesterday's, gives the newly deposited waste volume. Using this approach, we can track the change of the waste volume in the repository through weekly, monthly, and yearly volume reports, and thus obtain quantitative statistics of the waste volume.
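A uniform-mesh volume computation of the kind the abstract describes can be sketched as: bin the surface point cloud into a regular XY grid and integrate mean cell height times cell area. This is one plausible reading of the method, not the paper's exact algorithm:

```python
def mesh_volume(points, cell=1.0):
    """Approximate the volume under a surface point cloud by
    binning (x, y, z) points into a uniform XY grid and summing
    mean cell height times cell area."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))  # grid cell index
        cells.setdefault(key, []).append(z)
    area = cell * cell
    return sum(sum(zs) / len(zs) * area for zs in cells.values())
```

Applied to yesterday's and today's scans, the difference of the two returned volumes gives the newly deposited waste, as described above.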

Real-Time Camera Tracking for Virtual Studio (가상스튜디오 구현을 위한 실시간 카메라 추적)

  • Park, Seong-Woo;Seo, Yong-Duek;Hong, Ki-Sang
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.7
    • /
    • pp.90-103
    • /
    • 1999
  • In this paper, we present an overall algorithm for real-time camera parameter extraction, one of the key elements in implementing a virtual studio. The prevailing mechanical methods for tracking cameras have several disadvantages, such as price, calibration with the camera, and operability. To overcome these disadvantages, we calculate camera parameters directly from the input image using computer-vision techniques. When zoom lenses are used, real-time calculation of lens distortion is required; but in the Tsai algorithm, adopted here for camera calibration, this is obtained through nonlinear optimization in a three-parameter space, which usually takes a long computation time. We propose a new method that separates the lens distortion parameter from the other two parameters, so that the problem reduces to nonlinear optimization in a one-parameter space, which can be computed fast enough for real-time application.

  • PDF
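Once the lens-distortion parameter is separated out as described above, the remaining problem is a one-dimensional nonlinear minimization, for which golden-section search is one standard fast choice. The cost function and search bracket here are placeholders, not the paper's:

```python
def refine_distortion(cost, lo, hi, tol=1e-6):
    """Golden-section search for the single distortion parameter
    minimizing cost() over the bracket [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2  # inverse golden ratio ~0.618
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):
            b, d = d, c           # minimum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d           # minimum lies in [c, b]
            d = a + g * (b - a)
    return (a + b) / 2
```

With a quadratic test cost centred at 0.3, the search recovers that minimum to within the tolerance.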

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.580-591
    • /
    • 2019
  • This paper focuses on improving object detection performance with a camera and LiDAR on autonomous vehicle platforms by fusing objects detected by the individual sensors through a late-fusion approach. For object detection with the camera sensor, the YOLOv3 model was employed as a one-stage detector, and the distances of detected objects were estimated from the perspective-matrix formulation. Object detection with LiDAR, on the other hand, is based on a K-means clustering method. Camera-LiDAR calibration was carried out by PnP-RANSAC to calculate the rotation and translation matrices between the two sensors. For sensor fusion, intersection over union (IoU) on the image plane and the respective distance and angle in world coordinates were estimated, and all three attributes, IoU, distance, and angle, were fused using logistic regression. Performance evaluation in the sensor-fusion scenario showed an effective 5% improvement in object detection performance compared to using a single sensor.
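The fusion step, image-plane IoU plus distance and angle differences passed through logistic regression, can be sketched as follows. The weights and bias are hypothetical stand-ins for the coefficients the paper learns from data:

```python
import math

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse(iou_val, dist_diff, angle_diff, w, bias):
    """Logistic regression over the three cues; returns a match
    probability in (0, 1). Weights w and bias are placeholders."""
    s = bias + w[0] * iou_val + w[1] * dist_diff + w[2] * angle_diff
    return 1.0 / (1.0 + math.exp(-s))
```

A camera box and a projected LiDAR box overlapping well (high IoU, small distance and angle differences) then score close to 1 and are associated as one object.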

3D reconstruction method without projective distortion from un-calibrated images (비교정 영상으로부터 왜곡을 제거한 3 차원 재구성방법)

  • Kim, Hyung-Ryul;Kim, Ho-Cul;Oh, Jang-Suk;Ku, Ja-Min;Kim, Min-Gi
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.391-394
    • /
    • 2005
  • In this paper, we present an approach that reconstructs 3-dimensional metric models from uncalibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, or of the arrangement of one camera with respect to the other, then the projective reconstruction will contain projective distortion, expressed by an arbitrary projective transformation. This distortion is removed, upgrading the reconstruction from projective to metric, through self-calibration. Self-calibration is the process of determining the internal camera parameters directly from multiple uncalibrated images; it requires no information about the camera matrices or the scene geometry, and it avoids the onerous task of calibrating cameras with special calibration objects. The root of the method is a uniquely fixed conic in 3D space, the absolute quadric, which can be identified from the images; once the absolute quadric is identified, the metric geometry can be computed. We compared the reconstruction from calibrated images with the result of the self-calibration method.

  • PDF
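The self-calibration constraint behind the absolute-quadric approach is commonly written as follows: the dual image of the absolute conic in each view is the projection of the absolute dual quadric (standard notation, not quoted from the paper):

```latex
\omega^{*}_{i} \sim P_{i}\, Q^{*}_{\infty}\, P_{i}^{\top},
\qquad \omega^{*}_{i} = K_{i} K_{i}^{\top}
```

Once the absolute dual quadric is located from these constraints across views, each calibration matrix follows by Cholesky factorization of the corresponding dual conic, and the reconstruction can be upgraded to metric.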

Noncontact 3-dimensional measurement using He-Ne laser and CCD camera (He-Ne 레이저와 CCD 카메라를 이용한 비접촉 3차원 측정)

  • Kim, Bong-chae;Jeon, Byung-cheol;Kim, Jae-do
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.11
    • /
    • pp.1862-1870
    • /
    • 1997
  • A fast and precise technique to measure the 3-dimensional coordinates of an object is proposed. Taking 3-dimensional measurements of an object is essential in design and inspection, and using the developed system a surface model of a complex shape can be constructed. 3-dimensional world coordinates are projected onto the camera plane by the perspective transformation, which plays an important role in this measurement system. Depending on the shape of the object, two measuring methods are proposed: one rotates the object, and the other translates the measuring unit. The measuring speed, limited by the image processing time, is 200 points per second. Measurement resolution was examined with respect to two parameters among others: the angle between the laser-beam plane and the camera, and the distance between the camera and the object. These experiments showed that the measurement resolution ranges from 0.3 mm to 1.0 mm. The constructed surface model can be used in manufacturing tools such as rapid prototyping machines.
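The laser-stripe triangulation above can be sketched in its simplest form: the stripe's lateral shift in the image, scaled to millimetres, maps to surface height through the angle between the laser plane and the camera axis. Both the pixel scale and the exact geometry here are assumptions for illustration, not the paper's calibration:

```python
import math

def point_height(pixel_offset, mm_per_px, theta_deg):
    """Laser-sheet triangulation (simplified geometry): convert the
    stripe's image displacement to mm, then divide by tan(theta),
    the angle between the laser plane and the camera axis."""
    return pixel_offset * mm_per_px / math.tan(math.radians(theta_deg))
```

At a 45-degree viewing angle the displacement-to-height scale is 1, so a 10 px shift at 0.5 mm/px corresponds to 5 mm of height; the abstract's observation that resolution varies with this angle and with the camera-object distance follows directly from the two factors in the formula.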