• Title/Summary/Keyword: 3D Depth

3D GIS Implementation using the LDI (Layered Depth Image) Representation Method (LDI 표현방법을 이용한 3D GIS 구현)

  • Song Sang-Hun;Jung Young-Kee
    • KSCI Review
    • /
    • v.14 no.1
    • /
    • pp.231-239
    • /
    • 2006
  • A geographic information system (GIS) is a software system for referencing and handling geographic information. When the representation of geographic data is viewed as the central function of a GIS, research and development has been moving from the conventional 2D representation toward 3D representation. Because storing and handling huge geographic data sets quickly and efficiently remains difficult, this paper proposes image-based modeling and rendering with LDI (Layered Depth Images) for efficient 3D GIS scene rendering. 3D terrain data were acquired, and since the acquired terrain data carry depth information, an LDI was created from that depth information. An LDI was also created with a traditional modeling approach using 3DS-Max. Using the LDI data obtained in this way, more efficient 3D rendering of the GIS was possible.
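
A layered depth image stores, for each pixel, a list of depth-ordered samples rather than a single value, which is what makes image-based rendering of occluded geometry possible. A minimal illustrative sketch of such a structure in Python (assumed names; not the paper's implementation):

```python
import numpy as np

class LayeredDepthImage:
    """Minimal LDI: each pixel holds a list of (depth, color) samples."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        # One sample list per pixel; front-to-back order is maintained on insert.
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, x, y, depth, color):
        """Insert a sample, keeping the pixel's samples sorted by increasing depth."""
        samples = self.layers[y][x]
        samples.append((depth, color))
        samples.sort(key=lambda s: s[0])

    def render_front(self):
        """Render only the nearest layer (an ordinary depth-buffered view)."""
        img = np.zeros((self.height, self.width, 3), dtype=np.uint8)
        for y in range(self.height):
            for x in range(self.width):
                if self.layers[y][x]:
                    img[y, x] = self.layers[y][x][0][1]
        return img
```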

Profilometry based on Structured Illumination with Hypercentric Optics (하이퍼센트릭 광학계를 이용한 구조 조명 형상 측정 방법)

  • Kim, Sungmin;Cho, Minguk;Lee, Maengjin;Hahn, Joonku
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.12
    • /
    • pp.1089-1093
    • /
    • 2013
  • Depth extraction using structured illumination is widely applied since it allows measuring an object without contact. With multiple spatial frequencies and phase-shifting techniques, it is possible to extract the depth of objects with large discontinuities. For applications such as 3D (three-dimensional) displays, 3D information of the object is required and is most useful when it corresponds to each view of the display. For this purpose, hypercentric optics is appropriate for measuring the depth information of an object over the large field of view required by a 3D display. Experimentally, we demonstrate the feasibility of phase-shifting profilometry with hypercentric optics for obtaining depth information over a field of view appropriate for a 3D display.
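
The phase-shifting step mentioned in the abstract typically recovers a wrapped phase map from N fringe images with equally spaced phase shifts. A small sketch of that standard computation (illustrative only; the paper's multi-frequency unwrapping is not shown):

```python
import numpy as np

def wrapped_phase(images):
    """
    Standard N-step phase shifting: images[n] = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase map phi in (-pi, pi].
    """
    images = np.asarray(images, dtype=np.float64)   # shape (N, H, W)
    n = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    num = -(images * np.sin(deltas)[:, None, None]).sum(axis=0)
    den = (images * np.cos(deltas)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)
```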

Transformation of Stereoscopic Images for 3D Perception Improvement (입체영상의 3D 증강을 위한 입체영상 변환)

  • Gil, Jong In;Choi, Hwang Kyu;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.17 no.6
    • /
    • pp.911-923
    • /
    • 2012
  • Recently, 3DTV and 3D displays have been released in the market. Accordingly, the production of stereoscopic images has gained much interest. Stereoscopic images, composed of left and right images, are currently delivered to viewers without any modification. For 2D images, research on enhancing depth perception with high-frequency components and on reproducing natural color through color compensation has already been carried out. Applying such 2D technologies to 3D stereoscopic images is the aim of this paper, which proposes the enhancement of 3D perception by color transformation. For this, we propose a stereo matching method for obtaining a depth map and two color transformation methods, contrast transformation and background darkening. The effectiveness of the proposed method was verified through experiments.
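
One of the two transformations named in the abstract, background darkening, can be illustrated as a depth-weighted luminance attenuation. This is only a hedged sketch with assumed conventions (8-bit depth map, larger values nearer), not the authors' exact transform:

```python
import numpy as np

def darken_background(image, depth, strength=0.5):
    """
    Illustrative background darkening: attenuate the brightness of far pixels.
    `depth` is assumed to be an 8-bit map where larger values are nearer.
    """
    d = depth.astype(np.float64) / 255.0          # 1.0 = near, 0.0 = far
    gain = 1.0 - strength * (1.0 - d)             # far pixels get a smaller gain
    out = image.astype(np.float64) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```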

3D HDTV service method based on MPEG-C part.3 (MPEG-C part.3를 이용한 고화질 3D HDTV 전송방안)

  • Kang, Jeonho;Lee, Gilbok;Kim, Kyuheon;Cheong, Won-Sik;Yun, Kugjin
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.298-301
    • /
    • 2010
  • One of the major topics in the electronics industry recently is 3D. 3D stereoscopic imaging technology is changing the media environment, and the broadcasting environment is changing along with it. High-definition 3D stereoscopic broadcasting will be a service that maintains compatibility with existing 2D broadcast services while 2D and 3D programs are provided in a temporally mixed schedule. High-definition 3D stereoscopic broadcasting cannot be provided as-is in the existing 2D-based digital broadcasting service environment. In MPEG, the international standards body, ISO/IEC 23002-3 (MPEG-C part.3) has been standardized as a 3D service method. MPEG-C part.3 uses a depth map and a parallax map as auxiliary data. However, for images with high spatial frequency, object boundaries in the depth map and parallax map are not well defined, so object edges may be smeared when the 3D stereoscopic image is rendered. This paper therefore presents a transmission method for high-definition 3D stereoscopic broadcasting services and introduces a stereoscopic video transmission method based on MPEG-C part.3.
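
MPEG-C part.3 signals per-pixel depth (or parallax) as auxiliary data, and a receiver converts it to disparity for view synthesis. A commonly used DIBR-style depth-to-disparity mapping is sketched below; the parameter names are assumptions, not taken from the paper:

```python
import numpy as np

def depth_to_disparity(depth8, f_px, baseline_m, z_near, z_far):
    """
    Common DIBR-style mapping from an 8-bit depth map to pixel disparity.
    The 8-bit value v is first mapped to metric depth Z via
        1/Z = (v/255) * (1/z_near - 1/z_far) + 1/z_far,
    and then disparity = f_px * baseline_m / Z.
    """
    v = depth8.astype(np.float64) / 255.0
    inv_z = v * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return f_px * baseline_m * inv_z
```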

A Study on 3-Dimensional Hangeul Font (3차원 한글 Font에 관한 연구)

  • Ji, Kyung-Hee;Cho, Dong-Sub
    • Proceedings of the KIEE Conference
    • /
    • 1989.07a
    • /
    • pp.477-480
    • /
    • 1989
  • In this paper, we introduce three-dimensional Korean character display using 2D fonts and a character depth. Character segments are constructed from sets of vertices at run time, and the character depth is applied for 3D visualization. The variation of the eye point and of the distance to the object is then used for 3D character animation.
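
The construction described above amounts to extruding 2D glyph geometry by a character depth. A rough, hypothetical sketch of such an extrusion (not the paper's code):

```python
def extrude_glyph(outline_2d, depth):
    """
    Extrude a 2D glyph outline (list of (x, y) vertices) into a 3D solid:
    front face at z = 0, back face at z = depth, plus side quads.
    Returns the vertex list and the quad index list for the sides.
    """
    front = [(x, y, 0.0) for x, y in outline_2d]
    back = [(x, y, depth) for x, y in outline_2d]
    vertices = front + back
    n = len(outline_2d)
    # Side faces connect consecutive outline vertices to their extruded copies.
    sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, sides
```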

CALOS : Camera And Laser for Odometry Sensing (CALOS : 주행계 추정을 위한 카메라와 레이저 융합)

  • Bok, Yun-Su;Hwang, Young-Bae;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.2
    • /
    • pp.180-187
    • /
    • 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. A 2D laser sensor provides accurate depth information on a single plane, not the whole 3D structure; in contrast, CCD cameras provide a projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points among the scan data and their corresponding image points, and is refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as for 3D reconstruction.
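
The described scheme, pose from three scan points and their image correspondences followed by non-linear refinement, is closely related to the perspective-n-point problem. A rough modern analogue using OpenCV is sketched below; this is not the authors' implementation:

```python
import numpy as np
import cv2

def estimate_motion(laser_points_3d, image_points_2d, K):
    """
    Rough analogue of the described scheme: estimate camera pose from 3D laser
    points and their 2D image correspondences (PnP with RANSAC), then refine
    the result with a non-linear (Levenberg-Marquardt) step.
    """
    obj = np.asarray(laser_points_3d, dtype=np.float64)
    img = np.asarray(image_points_2d, dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    if not ok:
        return None
    rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, None, rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```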

Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.31-43
    • /
    • 2014
  • A depth map is an important component of stereoscopic image generation. Since a depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied for the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric. In addition, subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, relatively little work analyzes the relation between depth map upsampling and stereoscopic images. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric the lowest. From the linear regression, we also obtain the relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
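
The cross correlation and linear regression used in this kind of analysis can be reproduced with standard tools. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def metric_vs_fatigue(metric_scores, subjective_scores):
    """
    Pearson correlation and least-squares linear fit between an objective
    upsampling metric (e.g., edge PSNR) and subjective 3D fatigue scores.
    """
    x = np.asarray(metric_scores, dtype=np.float64)
    y = np.asarray(subjective_scores, dtype=np.float64)
    r = np.corrcoef(x, y)[0, 1]                 # cross correlation coefficient
    slope, intercept = np.polyfit(x, y, 1)      # linear regression y = a*x + b
    return r, slope, intercept
```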

Recent Trends of Weakly-supervised Deep Learning for Monocular 3D Reconstruction (단일 영상 기반 3차원 복원을 위한 약교사 인공지능 기술 동향)

  • Kim, Seungryong
    • Journal of Broadcast Engineering
    • /
    • v.26 no.1
    • /
    • pp.70-78
    • /
    • 2021
  • Estimating 3D information from a single image is an essential problem in numerous applications. Since a 2D image can inherently originate from an infinite number of different 3D scenes, 3D reconstruction from a single image is notoriously challenging. This challenge has been addressed by recent deep convolutional neural networks (CNNs), which model the mapping function between a 2D image and 3D information. However, training such deep CNNs demands massive amounts of training data, which are difficult or even impossible to build. Recent work therefore aims at deep learning techniques that can be trained in a weakly-supervised manner, using meta-data without relying on ground-truth depth data. In this article, we introduce recent developments in weakly-supervised deep learning techniques, categorized into scene 3D reconstruction and object 3D reconstruction, and discuss limitations and future directions.
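
A widely used weak-supervision signal in this line of work is a photometric reprojection loss: the target view is synthesized from another view using the predicted depth and relative pose, and compared against the real image. A simplified PyTorch-style sketch (illustrative; not taken from the article):

```python
import torch
import torch.nn.functional as F

def photometric_loss(depth, target, source, K, K_inv, T):
    """
    Self-/weakly-supervised signal for monocular depth: synthesize the target
    view from a source view using predicted depth and relative pose T, then
    compare photometrically (L1).
    Shapes: depth (B,1,H,W), images (B,3,H,W), K/K_inv (B,3,3), T (B,4,4).
    """
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()   # (3,H,W)
    pix = pix.view(1, 3, -1).expand(b, -1, -1).to(depth.device)       # (B,3,HW)

    cam = K_inv @ pix * depth.view(b, 1, -1)          # back-project to 3D rays * depth
    cam = torch.cat([cam, torch.ones(b, 1, h * w, device=depth.device)], dim=1)
    src = K @ (T @ cam)[:, :3]                        # project into the source view
    src = src[:, :2] / (src[:, 2:3] + 1e-7)

    # Normalize to [-1, 1] for grid_sample and resample the source image.
    grid = torch.stack([src[:, 0] / (w - 1), src[:, 1] / (h - 1)], dim=-1)
    grid = (grid * 2 - 1).view(b, h, w, 2)
    synthesized = F.grid_sample(source, grid, align_corners=True)
    return (synthesized - target).abs().mean()
```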

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.866-876
    • /
    • 2014
  • Generating a digital hologram of video content that contains computer graphics (CG) requires natural fusion of 3D information between real and virtual elements. In this paper, we propose a system that fuses real and virtual 3D information naturally and quickly generates the digital hologram of the fused result using a multiple-GPU based computer-generated hologram (CGH) computation module. The system calculates the camera projection matrix of an RGB-Depth camera and estimates the 3D information of the virtual object. The 3D information of the virtual object obtained from the projection matrix and the 3D information of the real space are transferred to a Z-buffer, where they are fused naturally. The fused result in the Z-buffer is passed to the multiple-GPU based CGH module, which computes the digital hologram of the fused result quickly. In experiments, the 3D information of the virtual object obtained with the proposed system showed a mean relative error (MRE) of about 0.5138% with respect to the real 3D information, that is, an accuracy of about 99%. We also verify that the proposed system can generate the digital hologram of the fused result quickly by using multiple-GPU based CGH computation.
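
The Z-buffer fusion step can be illustrated as a per-pixel nearest-sample selection between the real (RGB-Depth camera) and virtual (CG) data. A minimal sketch with assumed array conventions:

```python
import numpy as np

def fuse_z_buffer(real_rgb, real_depth, virt_rgb, virt_depth):
    """
    Per-pixel Z-buffer fusion: the nearer of the real and virtual samples
    (smaller depth value) wins, for both color and depth.
    """
    nearer_virtual = virt_depth < real_depth
    fused_depth = np.where(nearer_virtual, virt_depth, real_depth)
    fused_rgb = np.where(nearer_virtual[..., None], virt_rgb, real_rgb)
    return fused_rgb, fused_depth
```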

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.614-621
    • /
    • 2018
  • This paper proposes a method of restoring corrupted depth image captured by depth camera through unsupervised learning using generative adversarial network (GAN). The proposed method generates restored face depth images using 3D morphable model convolutional neural network (3DMM CNN) with large-scale CelebFaces Attribute (CelebA) and FaceWarehouse dataset for training deep convolutional generative adversarial network (DCGAN). The generator and discriminator equip with Wasserstein distance for loss function by utilizing minimax game. Then the DCGAN restore the loss of captured facial depth images by performing another learning procedure using trained generator and new loss function.