• Title/Summary/Keyword: 3D image reconstruction


Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.3 no.2 / pp.39-45 / 2016
  • Augmented reality is an environment that combines a real-world view with computer-generated information. Since the image a user sees through an augmented reality device is a composite of the real view and a virtual image, it is important that the computer-generated virtual image harmonizes well with the real view. In this paper, we review several computer vision and graphics methods that give the user a realistic augmented reality experience. To generate a visually harmonized composite of a real and a virtual image, the computer must know the 3D geometry and environmental information such as the lighting and the surface reflectivity of materials. Many computer vision methods aim to estimate these quantities. We introduce approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods needed to provide a realistic augmented reality experience.

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that represents the distance information of a 3D scene on a 2D plane and is used in various 3D vision tasks. Most existing depth estimation studies use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative adversarial network based image generation model estimates the relative FoV within the full panorama from a small number of non-overlapping images and produces a 360° RGB image and depth map simultaneously. In addition, performance is improved by configuring the network to reflect the spherical characteristics of 360° images.
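Handling the "spherical characteristics" the abstract refers to ultimately comes down to the mapping between pinhole (narrow-FoV) image coordinates and equirectangular panorama coordinates. The sketch below is a minimal, hypothetical illustration of that mapping only, not the paper's network; the function name, the assumed camera model (ideal pinhole, yaw-only placement), and the panorama size are all assumptions introduced here for illustration.

```python
import numpy as np

def pinhole_to_equirect(h, w, fov_x_deg, fov_y_deg, yaw_deg, pano_w=2048, pano_h=1024):
    """Map each pixel of an h x w pinhole image with the given FoV onto
    equirectangular panorama pixel coordinates (illustrative sketch only).
    Returns two h x w integer arrays: panorama column and row indices."""
    fx = (w / 2) / np.tan(np.radians(fov_x_deg) / 2)   # focal length in pixels (x)
    fy = (h / 2) / np.tan(np.radians(fov_y_deg) / 2)   # focal length in pixels (y)
    u, v = np.meshgrid(np.arange(w) - w / 2 + 0.5, np.arange(h) - h / 2 + 0.5)
    # Ray directions in the camera frame (z forward, x right, y down).
    rays = np.stack([u / fx, v / fy, np.ones_like(u)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    lon = np.arctan2(rays[..., 0], rays[..., 2]) + np.radians(yaw_deg)  # camera yaw placement
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    col = ((lon / (2 * np.pi) + 0.5) % 1.0) * pano_w   # longitude -> panorama column
    row = (lat / np.pi + 0.5) * pano_h                 # latitude  -> panorama row
    return col.astype(int) % pano_w, np.clip(row.astype(int), 0, pano_h - 1)
```

A generator that respects this spherical parameterization treats the panorama differently from an ordinary planar image, which is what the network design described in the abstract aims at.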

Image Based 3D Reconstruction of Texture-less Objects for VR Contents

  • Hafeez, Jahanzeb;Lee, Seunghyun;Kwon, Soonchul;Hamacher, Alaric
    • International journal of advanced smart convergence / v.6 no.1 / pp.9-17 / 2017
  • Recent developments in virtual and augmented reality have increased the demand for content in many different fields. One of the fastest ways to create VR content is 3D modeling of real objects. In this paper, we propose a system to reconstruct three-dimensional models of real objects from a set of two-dimensional images, under the assumption that the subject lacks distinct features. We explicitly consider an object that consists of one or more surfaces and radiates constant energy isotropically. We design a low-cost, portable multi-camera rig capable of capturing images from all cameras simultaneously. To evaluate the performance of the proposed system, the reconstructed 3D model is compared with a CAD model. A simple algorithm is also proposed to recover the original texture or color of the subject. Using the best pattern found in the experiments, a 3D model of the Pyeongchang Olympic mascot "Soohorang" was created for use as VR content.
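A common way to compare a reconstruction against a reference CAD model, as the abstract describes, is a nearest-neighbor (cloud-to-cloud) distance after sampling both as point clouds. The snippet below is a small sketch of that generic idea under the assumption that both models are available as Nx3 arrays; it is not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(reconstructed, reference):
    """Mean and max nearest-neighbor distance from each reconstructed point
    to the reference (e.g., CAD-sampled) point cloud. Both inputs are Nx3 arrays."""
    tree = cKDTree(reference)
    dists, _ = tree.query(reconstructed)   # distance from each point to its closest reference point
    return dists.mean(), dists.max()
```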

Stereoscopic 3D Video Editing Method for Visual Comfort (시각적 편안함을 위한 입체적 삼차원 영상 편집 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.706-716 / 2016
  • Each year, significant amounts of stereoscopic 3D (S3D) content are released. However, viewers readily experience fatigue from various factors. Consequently, many studies have sought improvements within the disparity domain, for example by simply controlling the disparity or by optimizing the speed at which viewers' eyes react to vergence. Such studies, however, remain restricted to the disparity domain and therefore to a limited range of applications. In this study, we go beyond this limitation and analyze how the reconstruction of color and brightness, in addition to disparity and other important features, affects the eyes in terms of vergence adaptation. We found that higher color similarity has a more positive effect on vergence adaptation during viewing. Based on this analysis, we propose a method for extracting similar colors between takes that is applicable to real-life situations. In an evaluation, the algorithm was applied to publicly available S3D content and produced color-converted, optimized images. The vergence adaptation time for the processed content decreased significantly and was minimized through color reconstruction, thereby enhancing viewer concentration.
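The color-similarity idea can be pictured with a simple histogram comparison between two takes. The sketch below is a stand-in illustration under assumed choices (RGB space, histogram intersection, 16 bins), not the paper's extraction algorithm.

```python
import numpy as np

def color_similarity(take_a, take_b, bins=16):
    """Histogram-intersection similarity in [0, 1] between two RGB frames
    given as HxWx3 uint8 arrays; higher means more similar color content."""
    def hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h / h.sum()
    ha, hb = hist(take_a), hist(take_b)
    return float(np.minimum(ha, hb).sum())
```

Scoring adjacent takes this way would let an editor favor cuts between takes whose color content is closest, in the spirit of the finding that higher color similarity eases vergence adaptation.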

Volumetric Visualization using Depth Information of Stereo Images (스테레오 영상에서의 깊이정보를 이용한 3차원 입체화)

  • 이성재;김정훈;윤성원;최종주;이명호
    • 제어로봇시스템학회:학술대회논문집 / 2000.10a / pp.541-541 / 2000
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, we performed feature-point based stereo matching to obtain depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The resulting image aids the visual understanding of the depth information.
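For a calibrated, rectified stereo pair, the depth of a matched feature point follows directly from its disparity via Z = f·B/d. The few lines below simply restate that standard relation; the rectified-rig assumption and the unit choices are mine, not details taken from the paper.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d,
    with the focal length f in pixels, baseline B in millimetres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / disparity_px  # depth in millimetres
```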

Experimental results on Shape Reconstruction of Underwater Object Using Imaging Sonar (영상 소나를 이용한 수중 물체 외형 복원에 관한 기초 실험)

  • Lee, Yeongjun;Kim, Taejin;Choi, Jinwoo;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.10 / pp.116-122 / 2016
  • This paper proposes a practical object shape reconstruction method using an underwater imaging sonar. Three techniques are used to reconstruct the object shape. First, the vertical field of view of the imaging sonar is narrowed to reduce the uncertainty of the estimated 3D position, since a wide vertical field of view leads to incorrect estimates of the 3D position of the underwater object. Second, simple noise filtering and range detection methods are designed to extract distances from the sonar image. Lastly, a low-pass filter is adopted to estimate the probability of voxel occupancy. To demonstrate the proposed methods, shape reconstruction of three sample objects was performed in a basin, and the results are presented.
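The low-pass filtering of voxel occupancy mentioned in the abstract can be read as an exponential moving average of per-voxel hit/miss observations. The sketch below is one plausible reading of that step (the paper publishes no code), and the smoothing factor `alpha` is an assumed parameter.

```python
import numpy as np

def update_occupancy(prob_grid, hit_mask, alpha=0.2):
    """First-order low-pass (exponential) update of a voxel occupancy grid.
    prob_grid: float array with values in [0, 1]; hit_mask: boolean array of the
    same shape marking voxels hit by the current sonar range measurements."""
    observation = hit_mask.astype(float)          # 1 for a hit, 0 for free/unobserved
    return (1 - alpha) * prob_grid + alpha * observation
```

Repeated updates from many sonar pings smooth out spurious detections, so voxels that are hit consistently converge toward high occupancy while noise decays away.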

Plane-based Computational Integral Imaging Reconstruction Method of Three-Dimensional Images based on Round-type Mapping Model (원형 매핑 모델에 기초한 3차원 영상의 평면기반 컴퓨터 집적 영상 재생 방식)

  • Shin, Dong-Hak;Kim, Nam-Woo;Lee, Joon-Jae;Kim, Eun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.5 / pp.991-996 / 2007
  • Recently, computational reconstruction based on integral imaging, a promising three-dimensional display technique, has been actively researched. In this method, 3-D images are digitally reconstructed at the desired output planes by superposing all of the inversely magnified elemental images through a hypothetical pinhole array model. However, the conventional method mostly yields low-resolution reconstructions, because square-type elemental images introduce intensity irregularities with a grid structure at the reconstructed image plane. In this paper, to overcome this problem, we propose a novel computational integral imaging reconstruction (CIIR) method using a round-type mapping model. The proposed CIIR method overcomes the non-uniform reconstruction of the conventional method and improves the resolution of 3-D images. To show the usefulness of the proposed method, both computational and optical experiments were carried out and their results are presented.
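Conventional CIIR digitally back-projects each elemental image through its pinhole and superposes the inversely magnified copies at the chosen output plane. The sketch below shows only that baseline superposition step for square elemental images under simplified assumptions (integer magnification, non-negative integer pinhole positions, no round-type mapping), as a point of reference for the paper's round-type variant rather than a reproduction of it.

```python
import numpy as np

def ciir_plane(elemental_images, positions, magnification, out_shape):
    """Naive CIIR at one output plane: enlarge each elemental image by the
    plane-dependent magnification, shift it to its pinhole position, and average
    the overlapping contributions. elemental_images: list of 2-D arrays;
    positions: list of (row, col) integer offsets on the output plane."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for img, (r0, c0) in zip(elemental_images, positions):
        big = np.kron(img, np.ones((magnification, magnification)))  # inverse magnification
        h, w = big.shape
        r1, c1 = min(r0 + h, out_shape[0]), min(c0 + w, out_shape[1])
        acc[r0:r1, c0:c1] += big[: r1 - r0, : c1 - c0]
        cnt[r0:r1, c0:c1] += 1
    return acc / np.maximum(cnt, 1)  # normalized superposition
```

The grid-shaped intensity irregularity criticized in the abstract comes from the square footprint of each enlarged elemental image; the round-type mapping replaces that footprint to make the superposition more uniform.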

Single-View Reconstruction of a Manhattan World from Line Segments

  • Lee, Suwon;Seo, Yong-Ho
    • International journal of advanced smart convergence / v.11 no.1 / pp.1-10 / 2022
  • Single-view reconstruction (SVR) is a fundamental method in computer vision. Often used for reconstructing human-made environments, the Manhattan world assumption presumes that planes in the real world lie in mutually orthogonal directions. Accordingly, this paper addresses an automatic SVR algorithm for Manhattan worlds. A method for estimating the directions of planes using graph-cut optimization is proposed. After segmenting the image based on the extracted line segments, the data cost function and smoothness cost function for the graph-cut optimization are defined by considering the directions of the line segments and of neighboring segments. Furthermore, segments at the same depth are grouped during a depth-estimation step using a minimum spanning tree algorithm with the proposed weights. Experimental results demonstrate that, unlike previous methods, the proposed method can identify complex Manhattan structures in indoor and outdoor scenes and provide exact boundaries and intersections of planes.
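The data and smoothness costs in the graph-cut step can be read roughly as: a segment pays little to take a plane direction supported by its own line segments, and neighboring segments pay a penalty for taking different directions. The compact sketch below uses assumed functional forms (the paper's exact weights are not reproduced here) just to make that reading concrete.

```python
import numpy as np

def data_cost(segment_line_dirs, label):
    """Cost of assigning one of the three Manhattan plane directions (label in {0,1,2})
    to a segment: low when many of its line segments vote for that direction.
    segment_line_dirs: array of per-line vanishing-direction votes in {0, 1, 2}."""
    support = np.mean(segment_line_dirs == label) if len(segment_line_dirs) else 0.0
    return 1.0 - support

def smoothness_cost(label_p, label_q, shared_boundary_len, lam=0.5):
    """Potts-style penalty: neighboring segments with different plane directions
    pay a cost proportional to the length of their shared boundary."""
    return 0.0 if label_p == label_q else lam * shared_boundary_len
```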

Evaluation of Standardized Uptake Value applying EQ PET across different PET/CT scanners and reconstruction (PET/CT 장비와 영상 재구성 차이에 따른 EQ PET을 이용한 표준섭취계수의 평가)

  • Yoon, Seok Hwan;Kim, Byung Jin;Moon, Il Sang;Lee, Hong Jae
    • The Korean Journal of Nuclear Medicine Technology / v.22 no.1 / pp.35-42 / 2018
  • Purpose: The standardized uptake value (SUV) has been widely used as a quantitative metric of uptake in PET/CT for the diagnosis of malignant tumors and the evaluation of tumor therapy response. However, the SUV depends on various factors, including PET/CT scanner specifications and reconstruction parameters. The purpose of this study is to validate EQ PET for evaluating the SUV across different PET/CT systems. Materials and Methods: First, NEMA IEC body phantom data were used to calculate the EQ filter for OSEM3D with PSF and TOF reconstructions on three different PET/CT systems, in order to obtain EARL-compliant recovery coefficients for each sphere. The Biograph TruePoint 40 PET/CT images were reconstructed with OSEM3D+PSF, while images from the Biograph mCT 40 and Biograph mCT 64 PET/CT scanners were reconstructed with OSEM3D+PSF, OSEM3D+TOF, and OSEM3D+PSF+TOF. After reconstruction, the proprietary EQ filter was applied to the reconstructed data. The recovery coefficient (RC) was estimated as the ratio of measured to true activity concentration for spheres of different volumes, and the coefficient of variation (CV) of the RC for each sphere was compared. For the clinical study, we compared SUVmax across the different reconstruction algorithms in FDG PET images of 61 lung cancer patients acquired on the Biograph mCT 40 PET/CT scanner. Results: In the phantom study, the mean CV values of the RC for the OSEM3D, OSEM3D+PSF, OSEM3D+TOF, and OSEM3D+PSF+TOF reconstructions were 0.05, 0.04, 0.04, and 0.03, respectively. After applying the proprietary EQ filter, the mean CV values were 0.04, 0.03, 0.03, and 0.02, respectively. In the clinical study, applying EQ PET produced no statistically significant difference in SUVmax for the 61 patients' FDG PET images (p=1.000). Conclusion: This study indicates that the CV of the RC in the phantom decreased after applying EQ PET across different PET/CT systems, and that EQ PET reduced reconstruction-dependent variation in SUVs for the 61 lung cancer patients. Therefore, EQ PET is expected to provide accurate quantification when patients are scanned on different PET/CT systems.
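The quantities compared in this study have simple definitions: the body-weight SUV normalizes the measured activity concentration by injected dose per unit body weight, the recovery coefficient (RC) is the ratio of measured to true activity concentration, and the CV is the standard deviation divided by the mean. The sketch below merely restates those definitions in code; decay correction and unit consistency are assumed to be handled upstream, and the function names are mine, not part of the study.

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by injected dose per gram.
    Assumes decay-corrected inputs and approximately 1 g/ml tissue density."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def recovery_coefficient(measured_concentration, true_concentration):
    """RC = measured / true activity concentration for a phantom sphere."""
    return measured_concentration / true_concentration

def coefficient_of_variation(values):
    """CV = standard deviation / mean, e.g., of RCs across scanners or reconstructions."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()
```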

3-D OCT Image Reconstruction for Precision Analysis of Rat Eye and Human Molar (쥐 눈과 인간 치아의 정밀한 단층정보 분석을 위한 OCT 3-D 영상 재구성)

  • Jeon, Ji-Hye;Na, Ji-Hoon;Yang, Yoon-Gi;Lee, Byeong-Ha;Lee, Chang-Su
    • The KIPS Transactions:PartB / v.14B no.6 / pp.423-430 / 2007
  • Optical coherence tomography (OCT) is a high-resolution imaging technique that can image cross sections of microscopic structures in living tissue with a resolution of about 1 μm. In this paper, we implement an OCT system and acquire 2-D images of rat eye and human molar samples, targeting ophthalmology and dentistry applications. From the 2-D images, we reconstruct 3-D OCT images that provide additional information about the inner structure of the target objects. Using OpenGL reduces the 3-D processing time by a factor of about 10 compared with MATLAB.
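The 3-D reconstruction described here amounts to stacking registered 2-D cross sections (B-scans) into a volume that can then be rendered. The few lines below sketch only that stacking and normalization step with numpy, leaving the OpenGL rendering out; the function name and input format are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stack_bscans(bscans):
    """Stack a list of equally sized 2-D OCT cross sections (B-scans) acquired at
    successive lateral positions into a 3-D volume of shape (num_scans, H, W)."""
    volume = np.stack([np.asarray(b, dtype=np.float32) for b in bscans], axis=0)
    # Normalize intensities to [0, 1] so the volume can be handed to a renderer.
    volume -= volume.min()
    if volume.max() > 0:
        volume /= volume.max()
    return volume
```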