• Title/Abstract/Keyword: 3D Scene Reconstruction

Search results: 64

3D Reconstruction and Self-Calibration Based on Binocular Stereo Vision

  • 후영영;정경석 / 한국산학기술학회논문지 / Vol. 13 No. 9 / pp. 3856-3863 / 2012
  • We developed a technique for reconstructing 3D shape from stereo images that requires minimal user intervention. Reconstruction proceeds in three stages, each estimating a particular geometric stratum. The first stage estimates the epipolar geometry present in the images and matches feature points between them. The second stage is an affine-geometry estimation that locates a particular plane in projective space using the vanishing-point method. The third stage includes self-calibration of the camera and recovers the metric parameters from which a 3D model can be obtained. The advantage of this method is that the stereo images need not be calibrated beforehand for reconstruction, and we demonstrated its feasibility.
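The first of the three stages above, estimating the epipolar geometry from matched points, can be sketched with the standard normalized eight-point algorithm. This is a generic illustration on synthetic data, not the authors' implementation; the scene, intrinsics, and camera motion below are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: random 3D points in front of two cameras.
X = np.c_[rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 6, 20), np.ones(20)]
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
a = 0.1                                   # second camera: small yaw + baseline
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
P2 = K @ np.c_[R, np.array([-0.5, 0.0, 0.0])]

def project(P, X):
    x = X @ P.T
    return x[:, :2] / x[:, 2:]

x1, x2 = project(P1, X), project(P2, X)   # matched feature points

def normalize(pts):
    # Hartley normalization: centroid at origin, mean distance sqrt(2).
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.hypot(*(pts - c).T).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return np.c_[pts, np.ones(len(pts))] @ T.T, T

def fundamental_8pt(x1, x2):
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each match contributes one row of the homogeneous system A f = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2 (every fundamental matrix is singular).
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                  # undo the normalization

F = fundamental_8pt(x1, x2)
h1 = np.c_[x1, np.ones(len(x1))]
h2 = np.c_[x2, np.ones(len(x2))]
# Epipolar constraint x2' F x1 = 0 should hold for every match.
residual = np.abs(np.sum(h2 * (h1 @ F.T), axis=1)).max() / np.abs(F).max()
```

With noise-free synthetic matches the epipolar residual is at numerical precision; real pipelines wrap this in RANSAC to reject mismatches.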

Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun / 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1997년도 Proceedings International Workshop on New Video Media Technology / pp. 125-130 / 1997
  • In this paper we present an algorithm to generate new views of a scene, starting with images from weakly calibrated cameras. Errors in 3D scene reconstruction usually get reflected in the quality of the newly generated scene, so we seek a direct method for reprojection. We use the knowledge of dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. We borrow ideas from the object recognition literature and extend them significantly to solve the reprojection problem. Unlike epipolar-line-intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The theory of reprojection is combined with hardware-based rendering to achieve fast rendering. We demonstrate our results of novel view generation from stereo pairs for arbitrary locations of the virtual camera.
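The affine-coordinate transfer idea can be illustrated on synthetic data: four matched basis points define an affine frame, the affine coordinates of any fifth point are solved from two reference views, and those same coordinates reproject the point into a novel view (hence five matched points suffice). This sketch assumes ideal affine cameras with random made-up parameters; it is not the authors' algorithm, which extends the idea well beyond this toy setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def affine_cam():
    # A random 2x4 affine projection matrix [A | b].
    return rng.normal(size=(2, 4))

def project(M, X):
    return X @ M[:, :3].T + M[:, 3]

# Four basis points (an affine frame) plus extra scene points.
basis = rng.normal(size=(4, 3))
scene = rng.normal(size=(10, 3))
views = [affine_cam() for _ in range(3)]      # 2 reference views + 1 novel view
b = [project(M, basis) for M in views]        # projected basis points
s = [project(M, scene) for M in views]        # projected scene points

def affine_coords(b1, b2, p1, p2):
    # Two views give 4 linear equations in the 3 affine coordinates.
    D = np.vstack([(b1[1:] - b1[0]).T, (b2[1:] - b2[0]).T])   # 4x3 system
    r = np.concatenate([p1 - b1[0], p2 - b2[0]])
    return np.linalg.lstsq(D, r, rcond=None)[0]

def reproject(b_new, coords):
    # The same affine coordinates hold in the novel view.
    return b_new[0] + coords @ (b_new[1:] - b_new[0])

pred = np.array([reproject(b[2], affine_coords(b[0], b[1], p, q))
                 for p, q in zip(s[0], s[1])])
err = np.abs(pred - s[2]).max()               # vs. the true novel-view points
```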


Optical Encryption and Information Authentication of 3D Objects Considering Wireless Channel Characteristics

  • Lee, In-Ho;Cho, Myungjin / Journal of the Optical Society of Korea / Vol. 17 No. 6 / pp. 494-499 / 2013
  • In this paper, we present optical encryption and information authentication of 3D objects considering wireless channel characteristics. Using optical encryption such as double random phase encryption (DRPE) and 3D integral imaging, an encrypted 3D scene can be transmitted. However, the wireless channel introduces noise and fading into the transmitted 3D encrypted data: when the encrypted data is transmitted over a wireless channel, information may be lost or distorted by factors such as channel noise and propagation fading. Thus, using digital modulation and maximum likelihood (ML) detection, the noise and fading effects are mitigated and the encrypted data is estimated reliably at the receiver. In addition, using computational volumetric reconstruction of integral imaging and advanced correlation filters, the noise effects may be remedied and the 3D information authenticated. To validate our method, we carry out an optical experiment for sensing 3D information and simulations of optical encryption with DRPE and authentication with a nonlinear correlation filter. To the best of our knowledge, this is the first report on optical encryption and information authentication of 3D objects that considers wireless channel characteristics.
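The DRPE building block mentioned above can be sketched digitally: a random phase mask is applied in the input plane and a second one in the Fourier plane, turning the image into white-noise-like ciphertext that only the conjugate masks can invert. This is a minimal NumPy simulation of the textbook DRPE scheme only; the paper's integral imaging, wireless-channel modeling, ML detection, and authentication stages are not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scene slice to protect (made-up data).
img = rng.uniform(size=(32, 32))

# Two statistically independent random phase masks (unit modulus).
phi1 = np.exp(2j * np.pi * rng.uniform(size=img.shape))   # input plane
phi2 = np.exp(2j * np.pi * rng.uniform(size=img.shape))   # Fourier plane

def drpe_encrypt(f):
    # Mask in the spatial domain, then mask in the Fourier domain.
    return np.fft.ifft2(np.fft.fft2(f * phi1) * phi2)

def drpe_decrypt(c):
    # Undo the Fourier-plane mask, then the input-plane mask.
    return np.fft.ifft2(np.fft.fft2(c) * np.conj(phi2)) * np.conj(phi1)

cipher = drpe_encrypt(img)
recovered = np.real(drpe_decrypt(cipher))
err = np.abs(recovered - img).max()
# The ciphertext magnitude should carry essentially no trace of the image.
leak = abs(np.corrcoef(np.abs(cipher).ravel(), img.ravel())[0, 1])
```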

3D Reconstruction Method Without Projective Distortion from Un-calibrated Images

  • 김형률;김호철;오장석;구자민;김민기 / 대한전자공학회:학술대회논문집 / 대한전자공학회 2005년도 추계종합학술대회 / pp. 391-394 / 2005
  • In this paper, we present an approach that can reconstruct 3-dimensional metric models from un-calibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, nor of the arrangement of one camera with respect to the other, the reconstruction carries a projective distortion expressed by an arbitrary projective transformation. This distortion is removed by upgrading the reconstruction from projective to metric through self-calibration. Self-calibration is the process of determining internal camera parameters directly from multiple un-calibrated images; it requires no information about the camera matrices or the scene geometry, and it avoids the onerous task of calibrating cameras with special calibration objects. The root of the method is a uniquely fixed conic in 3D space, the absolute quadric, which can be identified from the images. Once the absolute quadric is identified, the metric geometry can be computed. We compared the reconstruction from calibrated images with the result of the self-calibration method.
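The projective distortion the abstract refers to can be demonstrated numerically: applying any invertible 4x4 homography H to the points while multiplying the cameras by H⁻¹ leaves every image measurement unchanged, which is exactly the ambiguity self-calibration must resolve. The cameras and points below are made-up synthetic values; identifying the absolute quadric itself is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# A "metric" reconstruction: homogeneous 3D points in front of two cameras.
X = np.c_[rng.uniform(-1, 1, (15, 2)), rng.uniform(4, 6, 15), np.ones(15)]
P1 = np.c_[np.eye(3), np.zeros(3)]
a = 0.2
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
P2 = np.c_[R, np.array([-1.0, 0.0, 0.1])]

def project(P, X):
    x = X @ P.T
    return x[:, :2] / x[:, 2:]

# Distort the reconstruction with an arbitrary invertible homography H:
# points become H X, cameras become P H^-1, and the images are identical.
H = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # well-conditioned by construction
Xp = X @ H.T
P1p, P2p = P1 @ np.linalg.inv(H), P2 @ np.linalg.inv(H)

err = max(np.abs(project(P1, X) - project(P1p, Xp)).max(),
          np.abs(project(P2, X) - project(P2p, Xp)).max())
```

Since the images cannot distinguish the two reconstructions, extra constraints (here, the absolute quadric) are needed to pin down the metric one.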


3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis

  • 황용호;홍현기 / 한국게임학회 논문지 / Vol. 6 No. 2 / pp. 45-50 / 2006
  • To composite a virtual object realistically into a real-world scene, the illumination present in the scene must be analyzed. This paper proposes a new light-environment reconstruction method that estimates the positions of the camera and the light sources without prior camera calibration. First, an HDR (High Dynamic Range) radiance map is generated from omni-directional multi-exposure images captured through a fisheye lens. The camera positions are then estimated from a set of corresponding points, and the light source positions are reconstructed using direction vectors. The light environment is reconstructed by classifying the lights into global lights, which affect most of the target space, and directional local lights, whose influence is confined to particular regions. Rendering within the reconstructed light environment using distributed ray tracing confirmed that realistic composite images are obtained. The proposed method has the advantage that it requires no prior camera calibration and reconstructs the light environment automatically.
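The first step above, merging multi-exposure images into an HDR radiance map, can be sketched as a hat-weighted average of per-exposure radiance estimates. This toy version assumes an ideal linear camera response with made-up exposure times; full pipelines in the Debevec-Malik style also recover the response curve, and the paper uses omnidirectional fisheye input rather than a plain image.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth scene radiance and three bracketed exposure times (seconds).
E = rng.uniform(0.05, 2.0, size=(16, 16))
times = [0.25, 1.0, 4.0]

def expose(E, t):
    # Ideal linear camera, clipped to the sensor's [0, 1] range.
    return np.clip(E * t, 0.0, 1.0)

def merge_hdr(images, times):
    # Each pixel's radiance estimate from one exposure is Z / t.
    # A hat weight distrusts under- and over-exposed (clipped) samples.
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for Z, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * Z - 1.0)   # peaks at mid-gray, zero at clip
        num += w * Z / t
        den += w
    return num / np.maximum(den, 1e-12)

shots = [expose(E, t) for t in times]
E_hat = merge_hdr(shots, times)           # reconstructed radiance map
err = np.abs(E_hat - E).max()
```

Pixels saturated in the long exposure are recovered from the short one and vice versa, which is the point of the multi-exposure capture.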


Geometric Regularization of Irregular Building Polygons: A Comparative Study

  • Sohn, Gun-Ho;Jwa, Yoon-Seok;Tao, Vincent;Cho, Woo-Sug / 한국측량학회지 / Vol. 25 No. 6_1 / pp. 545-555 / 2007
  • 3D buildings are the most prominent features comprising the urban scene. A few megacities around the globe have been virtually reconstructed as photo-realistic 3D models, which are accessible to the public through state-of-the-art online mapping services. Many research efforts have been made to develop automatic techniques for reconstructing large-scale 3D building models from remotely sensed data. However, existing methods still produce irregular building polygons due to errors induced partly by uncalibrated sensor systems and scene complexity, and partly by sensor resolution inappropriate to the observed object scales. Thus, a geometric regularization technique is urgently required to rectify such irregular building polygons quickly captured from low-quality sensor data. This paper aims to develop a new method for regularizing noisy building outlines extracted from airborne LiDAR data, and to evaluate its performance in comparison with existing methods, including Douglas-Peucker polyline simplification, total least-squares adjustment, model hypothesis-verification, and rule-based rectification. Based on the Minimum Description Length (MDL) principle, a new objective function, Geometric Minimum Description Length (GMDL), is introduced to regularize geometric noise by favoring repeated line directions and regular angle transitions while minimizing the number of vertices used. After generating hypothetical regularized models, a global optimum of geometric regularity is found by verifying the entire solution space. A comparative evaluation of the proposed geometric regularizer is conducted using both simulated and real building vectors with various levels of noise. The results show that GMDL outperforms the selected existing algorithms at most noise levels.
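Of the baselines compared above, Douglas-Peucker polyline simplification is compact enough to sketch in full: keep the segment endpoints, find the interior vertex farthest from the chord, and recurse only if it exceeds a tolerance. This is the generic algorithm on a made-up noisy outline, not the paper's GMDL regularizer.

```python
import numpy as np

def point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    d = b - a
    n = np.hypot(*d)
    if n == 0.0:
        return np.hypot(*(p - a))
    return abs(d[0] * (a[1] - p[1]) - d[1] * (a[0] - p[0])) / n

def douglas_peucker(pts, tol):
    # Keep the endpoints; recurse on the farthest vertex if it exceeds tol.
    pts = np.asarray(pts, dtype=float)
    if len(pts) < 3:
        return pts
    dists = [point_line_dist(p, pts[0], pts[-1]) for p in pts[1:-1]]
    i = int(np.argmax(dists)) + 1          # index of the farthest vertex
    if dists[i - 1] <= tol:
        return pts[[0, -1]]                # the chord approximates everything
    left = douglas_peucker(pts[:i + 1], tol)
    right = douglas_peucker(pts[i:], tol)
    return np.vstack([left[:-1], right])   # drop the duplicated split vertex

# A noisy rectangular building outline: jitter along the two long edges.
outline = [(0, 0), (1, 0.02), (2, -0.02), (3, 0.01), (4, 0),
           (4, 1), (3, 1.02), (2, 0.98), (1, 1.01), (0, 1)]
simplified = douglas_peucker(outline, tol=0.05)
```

On this input the jittered edge vertices collapse and only the four corners survive; note that, unlike GMDL, nothing here rewards parallel or perpendicular edges.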

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images

  • 엄기문;안충현;이수인;김강연;이관행 / 방송공학회논문지 / Vol. 9 No. 3 / pp. 185-195 / 2004
  • This paper proposes a multi-depth-map fusion technique for accurate 3D scene reconstruction. The proposed technique fuses depth maps obtained from stereo matching, a passive 3D acquisition method, and from a depth camera, an active 3D acquisition method. Conventional stereo matching, which estimates disparity between two stereo images, produces many disparity errors in occluded and low-texture regions. A depth map from a depth camera, in contrast, provides relatively accurate depth values but contains considerable noise, and its measurable depth range is limited. To overcome the drawbacks of the two techniques and make them complement each other, we propose a depth-map fusion technique that selects an appropriate disparity or depth value from the multiple depth maps the two methods produce. From three-view images, two disparity maps are computed for the left and right images with respect to the center view; after preprocessing to align positions and depth values with the depth map from the depth camera mounted on the center-view camera, an appropriate depth value is selected at each pixel based on the texture information and the depth distribution. Computer simulations of the proposed technique showed improved depth-map accuracy in some background regions.
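The complementary error profiles described above can be sketched with a toy per-pixel selection rule. All sensors, noise levels, and the validity convention below are made-up assumptions; the paper's actual rule also weighs texture and the local depth distribution rather than keying only on depth-camera validity.

```python
import numpy as np

rng = np.random.default_rng(5)

truth = rng.uniform(1.0, 3.0, size=(24, 24))        # ground-truth depth (m)

# Depth camera: accurate but noisy, with a limited measurable range
# (samples beyond 2.5 m come back as 0, meaning "invalid").
depth_cam = truth + rng.normal(0.0, 0.01, truth.shape)
depth_cam[truth > 2.5] = 0.0

# Stereo matching: valid everywhere, but with gross disparity errors
# in low-texture regions.
texture = rng.uniform(size=truth.shape)
low_tex = texture < 0.2
stereo = truth + rng.normal(0.0, 0.05, truth.shape)
stereo[low_tex] += rng.uniform(0.5, 1.0, low_tex.sum())

def fuse(depth_cam, stereo):
    # Per-pixel selection: prefer a valid depth-camera sample,
    # fall back to the stereo estimate elsewhere.
    return np.where(depth_cam > 0.0, depth_cam, stereo)

fused = fuse(depth_cam, stereo)
err_fused = np.abs(fused - truth).mean()
err_stereo = np.abs(stereo - truth).mean()
```

Even this crude rule beats either source alone on average, because each sensor covers the other's failure region.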

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo / 대한원격탐사학회지 / Vol. 39 No. 1 / pp. 1-21 / 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along the third axis: each pixel encodes z, the object's distance from the camera. For many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, depth estimation is a fundamental task. Much work has been done on computing depth maps. We reviewed the status of depth map estimation using different techniques from several papers, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep-learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on their challenges. We conclude with ideas for future research in depth map estimation.
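The traditional route from images to a depth map that such reviews start from is stereo triangulation: for a rectified pair, depth is z = f·B/d, with focal length f, baseline B, and per-pixel disparity d. A minimal sketch with made-up rig parameters:

```python
import numpy as np

# Pinhole stereo rig: focal length in pixels, baseline in meters (example values).
f_px, baseline_m = 700.0, 0.125

def depth_from_disparity(disparity_px):
    # Triangulation for a rectified stereo pair: z = f * B / d.
    # Zero disparity corresponds to a point at infinity.
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, f_px * baseline_m / np.maximum(d, 1e-12), np.inf)

z_m = depth_from_disparity([87.5, 43.75, 17.5, 0.0])   # depths 1, 2, 5 m, inf
```

The inverse relationship is why stereo depth precision degrades quadratically with distance; deep-learning methods either estimate d (stereo networks) or regress z directly (monocular networks).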

3D Visualization of Partially Occluded Objects Using Axially Distributed Image Sensing With a Wide-Angle Lens

  • Kim, Nam-Woo;Hong, Seok-Min;Lee, Hoon Jae;Lee, Byung-Gook;Lee, Joon-Jae / Journal of the Optical Society of Korea / Vol. 18 No. 5 / pp. 517-522 / 2014
  • In this paper we propose an axially distributed image-sensing method with a wide-angle lens to capture the wide-area scene of 3D objects. A lot of parallax information can be collected by translating the wide-angle camera along the optical axis. The recorded wide-area elemental images are calibrated using compensation of radial distortion. With these images we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To show the feasibility of the proposed method, we performed optical experiments for visualization of a partially occluded 3D object.
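The ray back-projection reconstruction above can be illustrated with a 1D shift-and-average toy: undoing each view's parallax for a hypothesized depth makes the object plane reinforce while occluders, which differ between views, average away. This simplification uses lateral shifts with a wrap-around parallax model and made-up parameters; the paper's axially distributed geometry actually back-projects through per-view magnification along the optical axis.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D toy scene: a textured object plane, partially hidden in every view
# by occluders that differ from view to view.
W = 128
obj = rng.uniform(size=W)
disp_per_cam = 3                  # object-plane parallax (px) per camera step

views = []
for k in range(-5, 6):            # 11 camera positions
    v = np.roll(obj, k * disp_per_cam)      # parallax (wrap-around toy model)
    mask = rng.uniform(size=W) < 0.3        # ~30% of pixels occluded per view
    v = np.where(mask, rng.uniform(size=W), v)
    views.append((k, v))

def refocus(views, shift):
    # Simplified ray back-projection: undo each view's parallax for a
    # hypothesized depth (shift px per camera step) and average the rays.
    return np.mean([np.roll(v, -k * shift) for k, v in views], axis=0)

in_focus = refocus(views, disp_per_cam)   # reconstruction at the object depth
off_focus = refocus(views, 0)             # reconstruction at a wrong depth

corr_in = np.corrcoef(in_focus, obj)[0, 1]
corr_off = np.corrcoef(off_focus, obj)[0, 1]
```

Sweeping the hypothesized shift produces the stack of volumetric slice images; the occluded object reappears only at its true depth.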

TEST OF A LOW COST VEHICLE-BORNE 360 DEGREE PANORAMA IMAGE SYSTEM

  • Kim, Moon-Gie;Sung, Jung-Gon / 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2008년도 International Symposium on Remote Sensing / pp. 137-140 / 2008
  • Recently, many areas require wide-field-of-view images, such as surveillance, virtual reality, navigation, and 3D scene reconstruction. Conventional camera systems have a limited field of view and provide only partial information about the scene; an omnidirectional vision system, however, can overcome these disadvantages. Acquiring 360-degree panorama images normally requires an expensive omnidirectional camera lens. In this study, a 360-degree panorama image system was tested using a low-cost optical reflector that captures 360-degree panoramic views in a single shot. The system can be used together with detailed positional information from GPS/INS. The results show that the 360-degree panorama image is a very effective tool for a mobile monitoring system.
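A single-shot reflector like this images the surroundings as a donut-shaped ring, which is unwrapped into a rectangular panorama by a polar-to-cartesian resampling. This is a generic nearest-neighbor unwarp on a synthetic frame with made-up center and radii, not the system's calibrated mapping.

```python
import numpy as np

def unwrap_panorama(img, cx, cy, r_in, r_out, out_w, out_h):
    # Each panorama column is an azimuth, each row a radius on the mirror
    # image; sample the donut with nearest-neighbor lookup.
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_in, r_out, out_h)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    x = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    y = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[y, x]

# Synthetic catadioptric frame: pixel intensity encodes azimuth around center.
H = W = 201
ys, xs = np.mgrid[0:H, 0:W]
angle = np.arctan2(ys - 100, xs - 100)            # -pi..pi around the center
donut = (angle + np.pi) / (2.0 * np.pi)           # 0..1 ramp around the ring

pano = unwrap_panorama(donut, cx=100, cy=100, r_in=30, r_out=90,
                       out_w=360, out_h=60)
```

In the unwrapped result each column is (near-)constant, since all radii at one azimuth see the same direction; a real system would add bilinear interpolation and mirror calibration.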
