• Title/Summary/Keyword: 3D 카메라


Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환;김기영;우운택
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.137-148
    • /
    • 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D Virtual Environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy the problem, a geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and another geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated from the extrinsic parameters of every lens. Inter-camera calibration arranges the multiple cameras in a geometric relationship; it exploits the Iterative Closest Point (ICP) algorithm on back-projected 3D point clouds. Finally, by repeatedly applying intra-/inter-camera calibration to all lenses of the rotating multi-view cameras, we can obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to stitching of 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
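The inter-camera step above relies on back-projecting each depth map into a 3D point cloud before running ICP. As a minimal sketch (assuming a simple pinhole model; the intrinsics `fx, fy, cx, cy` are hypothetical parameters, not values from the paper), the back-projection can be written as:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud
    using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # One (x, y, z) row per pixel.
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

A pixel at the principal point with depth 2 m maps to (0, 0, 2) in the camera frame; point clouds obtained this way from neighboring cameras are then the inputs to ICP.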

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.281-291
    • /
    • 2019
  • Depth from defocus estimates 3D depth by exploiting the phenomenon in which an object in the focal plane of the camera forms a sharp image while an object away from the focal plane produces a blurred one. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimized object range was obtained by 3D depth estimation derived from depth from defocus using either one image from a single camera or two differently focused images from a single camera. For depth estimation using one image, the best performance was achieved with a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone camera images and to 200 mm and 300 mm for DSLR camera images.
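The blur analysis that depth from defocus rests on can be illustrated with a simple sharpness measure. The sketch below uses the variance of a discrete Laplacian as a blur score (one common choice; the paper does not specify its blur metric), so a defocused image scores lower than a sharp one:

```python
import numpy as np

def laplacian_blur_score(img):
    """Variance of a discrete 4-neighbor Laplacian: high for sharp
    (in-focus) regions, low for defocused ones. img is a 2D array."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())
```

Comparing the score of an image against a blurred copy of itself gives the relative defocus; mapping the score to an absolute depth still requires the focus calibration the paper describes.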

The Research of Stereoscopic Camera Rig System for Implementation of 3D Stereoscopic image (3D 입체영상 구현을 위한 입체 카메라 리그 시스템 기술에 관한 연구)

  • Shin, Heung-Sub;Ramesh, Rohit;Chung, Wan-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.156-157
    • /
    • 2010
  • Recently, with the opening of new imaging markets and advances in digital technology, interest in and demand for next-generation 3D stereoscopic imaging technology have increased, and research aimed at producing high-quality stereoscopic images is being actively conducted. In implementing stereoscopic images, 3D stereoscopic camera system technology is a key element; in particular, research and development on stereoscopic camera rig systems, the special equipment that acquires the left/right images and synchronizes and combines the cameras, is actively under way at home and abroad. This paper describes the principles and structure of camera rig systems for implementing 3D stereoscopic images and surveys domestic and international technology development trends. Based on this, we suggest development directions for 3D stereoscopic camera rig system technology for producing high-quality stereoscopic images.


A Study on the Image-Based 3D Modeling Using Calibrated Stereo Camera (스테레오 보정 카메라를 이용한 영상 기반 3차원 모델링에 관한 연구)

  • 김효성;남기곤;주재흠;이철헌;설성욱
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.3
    • /
    • pp.27-33
    • /
    • 2003
  • Image-based 3D modeling is a technique for generating a 3D graphic model from images acquired with cameras, and it is being researched as an alternative to expensive 3D scanners. In this paper, we propose an image-based 3D modeling system using calibrated stereo cameras. The proposed algorithm for rendering the 3D model consists of three steps: camera calibration, 3D reconstruction, and 3D registration. In the camera calibration step, we estimate the camera matrix of the image acquisition camera. In the 3D reconstruction step, we calculate 3D coordinates by triangulation from corresponding points of the stereo image. In the 3D registration step, we estimate the transformation matrix that maps the individually reconstructed 3D coordinates to the reference coordinate system so that a single 3D model can be rendered. As the results show, we generated a relatively accurate 3D model.
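The 3D reconstruction step described above, triangulation from corresponding points of calibrated cameras, can be sketched with the standard linear (DLT) method; this is a generic illustration, not the paper's exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point observed at
    pixel x1 in camera P1 and pixel x2 in camera P2, where P1 and P2
    are 3x4 projection matrices."""
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With projection matrices from the calibration step, applying this to every matched point pair yields the dense 3D coordinates that the registration step then merges.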


Correcting 3D camera tracking data for video composition (정교한 매치무비를 위한 3D 카메라 트래킹 기법에 관한 연구)

  • Lee, Jun-Sang;Lee, Imgeun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2012.07a
    • /
    • pp.105-106
    • /
    • 2012
  • In general, CG compositing is judged to be good when it looks "natural". The live-action footage cannot always be a static shot: when the camera moves, the CG must be registered precisely to the real camera movement for the composite to look natural. This requires 3D camera tracking at the compositing stage. Camera tracking reconstructs the 3D space at shooting time, including the camera's 3D motion and optical parameters, from the live-action footage alone. Errors arising in camera tracking cause many productivity problems in compositing live action with CG. In this paper, we propose a method for correcting the tracking data in software to solve this problem.


3D Positioning Accuracy Estimation of DMC in Compliance with Introducing High Resolution Digital Aerial Camera (고해상도 디지털항공사진 카메라 도입에 따른 DMC의 3차원 위치결정 정확도 평가)

  • Hahm, Chang-Hahk;Chang, Hwi-Jeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.1
    • /
    • pp.743-750
    • /
    • 2009
  • Since aerial photogrammetry with analog cameras began in 1972, high-resolution digital cameras have recently been introduced to improve the efficiency of aerial photogrammetry. This study investigated the 3D positioning accuracy of the DMC (Digital Mapping Camera), one of the various high-resolution aerial digital cameras developed for photogrammetry. For the research, we installed control points in a test field around Incheon and acquired analog and digital aerial photographs. Comparing the 3D positioning accuracies of the analog and digital photographs, there was little difference between the two cameras, and the accuracy of both was somewhat improved when aerotriangulation used additional control points together with GPS/IMU EO data.
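An accuracy comparison of this kind typically reduces to per-axis RMSE between photogrammetric coordinates and surveyed check points. A minimal sketch (illustrative only, not the paper's evaluation procedure):

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """Per-axis RMSE between photogrammetrically determined
    coordinates and surveyed check points (N x 3 arrays, same units)."""
    d = np.asarray(measured) - np.asarray(reference)
    return np.sqrt((d ** 2).mean(axis=0))
```

Running this once on the analog-camera coordinates and once on the digital-camera coordinates, against the same check points, gives directly comparable X/Y/Z accuracy figures.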

3D Depth Estimation by Using a Single Smart Phone Camera (단일 스마트폰 카메라를 이용한 3D 거리 추정 방법)

  • Bae, Chul Kyun;Ko, Young Min;Kim, Seung Gi;Kim, Dae Jin
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.06a
    • /
    • pp.240-243
    • /
    • 2018
  • Recently, with advances in VR (Virtual Reality) and AR (Augmented Reality), techniques for estimating the distance between the camera and an object in video or still images have been actively studied. This paper studies algorithms that estimate 3D distance by analyzing the degree of blur in images taken with a single camera. In particular, using smartphone camera images rather than a DSLR camera fitted with an expensive lens, we study DFD-based distance estimation from a single image and from a combination of two images with different focus, and investigate the optimized object range. Distance estimation using one image had the widest estimation range with the camera focal length set to 200 mm; estimation using two images, with the focal lengths of the two images set to 150 mm and 250 mm, respectively. In both methods, a shorter focal length proved more effective for estimating the distance of nearby objects.


Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with the problem of precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of obtaining the camera extrinsic parameters from two-dimensional images has a large estimation error. In this paper, we propose a method that obtains the coordinate transformation parameters with an error within a valid range, using depth images and a function optimization method, to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
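Once point correspondences between two RGB-D views are available, the coordinate transformation between them can be estimated in closed form. The sketch below uses SVD-based least-squares rigid alignment (the Kabsch method), a common choice for this step; the paper's own function-optimization approach may differ:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning point set src
    onto dst (N x 3 arrays with known correspondences), via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Chaining such pairwise transforms brings all eight camera frames into one world coordinate system, after which the clouds can be merged into a single volumetric model.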

Study of Image Production using Steadicam Effects for 3D Camera (3D 카메라 기반 스테디캠 효과를 적용한 영상제작에 관한연구)

  • Lee, Junsang;Park, Sungdae;Lee, Imgeun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.12
    • /
    • pp.3035-3041
    • /
    • 2014
  • The steadicam effect is widely used in 3D animation production for natural camera movement. The conventional approach to the steadicam effect uses keyframe animation, which is a tedious and time-consuming process; moreover, it is difficult to simulate real-world camera movement naturally this way. In this paper we propose a novel method for reproducing the steadicam effect on the virtual camera of a 3D animation. We modeled a real-world camera in the Maya production tool, taking gravity, mass, and elasticity into account. The model is implemented in the Python language and applied directly to the Maya platform as a filter module. The proposed method reduces production time and improves the production environment, and it also produces more natural and realistic footage that maximizes the visual effect.
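A minimal version of such a physically motivated virtual-camera model is a mass-spring-damper that drags a smoothed camera toward the raw keyframed path. The 1D sketch below is illustrative only (the parameter values and function name are assumptions, not the paper's Maya module):

```python
def steadicam_smooth(path, mass=1.0, k=40.0, damping=12.0, dt=1.0 / 24.0):
    """Smooth a 1D camera track with a mass-spring-damper: the virtual
    camera is pulled toward each raw sample by a spring (stiffness k)
    and settled by a damper, mimicking a steadicam arm.
    path is a list of raw positions sampled at dt intervals (24 fps here)."""
    pos, vel = path[0], 0.0
    out = []
    for target in path:
        force = k * (target - pos) - damping * vel
        vel += (force / mass) * dt  # explicit Euler integration
        pos += vel * dt
        out.append(pos)
    return out
```

Applied per axis to a keyframed camera path, the filter removes jitter while the mass and damping terms keep the lag and settle behavior of a physical rig; in production this logic would run as a filter over Maya animation curves.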

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.8
    • /
    • pp.309-314
    • /
    • 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the RGB-D cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes a significant amount of time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely in the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
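Locating the sphere center in each camera's depth data amounts to a least-squares sphere fit. The sketch below shows the standard algebraic formulation (an illustration of that sub-step, not the authors' exact procedure): since |x - c|^2 = r^2 expands to 2 c.x + (r^2 - |c|^2) = |x|^2, the center falls out of a linear system.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve the linear system
    2*c.x + e = |x|^2 for the center c and e = r^2 - |c|^2."""
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Fitting the sphere in every camera's depth cloud at each position of the moving ball yields matched center coordinates, and the extrinsic parameters are then solved so those centers coincide in the world frame.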