• Title/Summary/Keyword: 3d depth image

Search Results: 615

Three-dimensional image processing using integral imaging method (집적 영상법을 이용한 3차원 영상 정보 처리)

  • Min, Seong-Uk
    • Proceedings of the Optical Society of Korea Conference / 2005.07a / pp.150-151 / 2005
  • Integral imaging is an autostereoscopic three-dimensional (3D) display method. An integral imaging system can provide a volumetric 3D image with both vertical and horizontal parallax. The elemental images obtained through a lens array in the pickup process carry the 3D information of the object and can be used for depth perception and 3D correlation. Moreover, elemental images representing a virtual space can also be generated by computer.

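As a sketch of the pickup process described in this abstract, each lenslet can be modeled as a pinhole that projects object points onto its own elemental image; the function name, grid size, and geometry parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pickup_elemental_images(points, lens_pitch=1.0, gap=2.0, grid=(3, 3)):
    """Project 3D points through a pinhole-array model of the lens array.

    Each lenslet (modeled as a pinhole at z = 0) forms one elemental image
    on a sensor plane at z = -gap. Returns, per lenslet, the 2D sensor
    coordinates of every point, relative to that lenslet's center.
    """
    elemental = {}
    for i in range(grid[0]):
        for j in range(grid[1]):
            # lenslet center in the array plane
            cx = (i - (grid[0] - 1) / 2) * lens_pitch
            cy = (j - (grid[1] - 1) / 2) * lens_pitch
            coords = []
            for (x, y, z) in points:  # z > 0 is the object side
                # similar triangles: the ray through the pinhole inverts
                # the offset and scales it by gap / z
                u = -(x - cx) * gap / z
                v = -(y - cy) * gap / z
                coords.append((u, v))
            elemental[(i, j)] = coords
    return elemental

ei = pickup_elemental_images([(0.0, 0.0, 4.0)])
```

Because each lenslet sees the same point from a slightly different position, the point lands at different offsets in different elemental images; this per-lenslet parallax is what makes elemental images usable for depth perception.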

Recent Trends of Weakly-supervised Deep Learning for Monocular 3D Reconstruction (단일 영상 기반 3차원 복원을 위한 약교사 인공지능 기술 동향)

  • Kim, Seungryong
    • Journal of Broadcast Engineering / v.26 no.1 / pp.70-78 / 2021
  • Estimating 3D information from a single image is an essential problem in numerous applications. Since a 2D image can inherently originate from an infinite number of different 3D scenes, 3D reconstruction from a single image is notoriously challenging. This challenge has been addressed by recent deep convolutional neural networks (CNNs), which model the mapping function between a 2D image and its 3D information. However, training such deep CNNs demands massive amounts of training data, which is difficult or even impossible to obtain. Recent work therefore aims at deep learning techniques that can be trained in a weakly-supervised manner, using only metadata rather than ground-truth depth data. In this article, we introduce recent developments in weakly-supervised deep learning, categorized into scene 3D reconstruction and object 3D reconstruction, and discuss limitations and future directions.
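Weakly-supervised depth training of the kind surveyed here commonly replaces ground-truth depth with a photometric reconstruction loss between the target view and a view re-synthesized using the predicted depth and pose. The numpy sketch below is a simplified, whole-image variant of the usual windowed SSIM+L1 combination, for illustration only, not any specific paper's code:

```python
import numpy as np

def photometric_loss(target, reconstructed, alpha=0.85):
    """Illustrative photometric loss for self-/weakly-supervised depth:
    compare the target view with a view reconstructed (warped) from
    another image using predicted depth and camera pose. Combines L1
    with a crude global SSIM-like term; real methods use local windows.
    """
    l1 = np.abs(target - reconstructed).mean()
    mu_t, mu_r = target.mean(), reconstructed.mean()
    var_t, var_r = target.var(), reconstructed.var()
    cov = ((target - mu_t) * (reconstructed - mu_r)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_t * mu_r + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_r ** 2 + c1) * (var_t + var_r + c2))
    return alpha * (1 - ssim) / 2 + (1 - alpha) * l1
```

A perfect reconstruction drives the loss to zero, so the network can be trained on image pairs alone, which is exactly what makes ground-truth depth unnecessary.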

Implementation of Real-time Stereoscopic Image Conversion Algorithm Using Luminance and Vertical Position (휘도와 수직 위치 정보를 이용한 입체 변환 알고리즘 구현)

  • Yun, Jong-Ho; Choi, Myul-Rul
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.5 / pp.1225-1233 / 2008
  • In this paper, a 2D/3D conversion algorithm is proposed. A single frame of the 2D image is used for real-time processing. The proposed algorithm creates a 3D image with a depth map generated from the vertical position of objects in the single frame. To enable real-time processing and reduce hardware complexity, it generates the depth map using image sampling, object segmentation with luminance standardization, and boundary scanning. The algorithm is suitable for both still and moving images, and because it uses vertical position information it provides a good 3D effect for images such as long-distance shots, landscapes, and panorama photos. It applies a 3D effect to an image without restrictions on the direction, velocity, or scene changes of an object. The algorithm has been evaluated by visual tests and by comparison with the MTD (Modified Time Difference) method using the APD (Absolute Parallax Difference).
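The vertical-position depth cue this kind of algorithm relies on can be sketched as follows; the linear gradient, the luminance-based object mask, and the 0.2 offset are illustrative assumptions, not the paper's method:

```python
import numpy as np

def vertical_position_depth(gray):
    """Sketch of a vertical-position depth cue: rows nearer the bottom
    of the frame are treated as closer to the viewer, which suits
    landscapes and long-distance shots. A crude luminance threshold
    stands in for object segmentation against this base gradient."""
    h, w = gray.shape
    # base depth: 0 (far) at the top row, 1 (near) at the bottom row
    base = np.linspace(0.0, 1.0, h).reshape(h, 1) * np.ones((1, w))
    # crude luminance-based object mask: brighter-than-average pixels
    mask = gray > gray.mean()
    depth = base.copy()
    # pull masked "object" pixels slightly toward the viewer
    depth[mask] = np.minimum(depth[mask] + 0.2, 1.0)
    return depth
```

The appeal of such a cue is that it needs only one frame and no motion estimation, which is why it works even when objects are static or the scene changes abruptly.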

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol; Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect camera generates holes and noise around object boundaries in the obtained images, and boundary flickering also occurs. We therefore propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around boundaries are filled using a joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images. Boundary flickering is then reduced by replacing the values of flickering pixels with the maximum pixel value of the previous depth image, and virtual views are generated by applying a 3D warping technique. Holes outside the occlusion region are also filled with the center pixel value of the most reliable block, after the final block reliability is computed using a block-based gradient search with block reliability. Experimental results show that the proposed algorithm generates the virtual-view image in real time.
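The joint-bilateral hole-filling step mentioned in this abstract can be sketched as below; the hole convention, window radius, and sigma values are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def fill_depth_holes(depth, color, hole=0, radius=2,
                     sigma_s=2.0, sigma_c=0.1):
    """Sketch of joint-bilateral hole filling: each hole pixel
    (depth == hole) is replaced by a weighted average of nearby valid
    depths, weighted by spatial distance and by similarity in the
    aligned (grayscale) color image, so filled depth follows color
    edges instead of bleeding across object boundaries."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] != hole:
                continue
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if depth[ny, nx] == hole:
                        continue
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    dc = color[ny, nx] - color[y, x]
                    wc = np.exp(-(dc * dc) / (2 * sigma_c ** 2))
                    num += ws * wc * depth[ny, nx]
                    den += ws * wc
            if den > 0:
                out[y, x] = num / den
    return out
```

Using the color image as the guide is the key design choice: a plain Gaussian fill would average depths from both sides of an object boundary, while the color term keeps each filled pixel consistent with its own surface.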

Realtime 3D Human Full-Body Convergence Motion Capture using a Kinect Sensor (Kinect Sensor를 이용한 실시간 3D 인체 전신 융합 모션 캡처)

  • Kim, Sung-Ho
    • Journal of Digital Convergence / v.14 no.1 / pp.189-194 / 2016
  • Recently, demand for image-processing technology has grown as the use of equipment such as cameras, camcorders, and CCTV has become widespread. In particular, research and development on 3D image technology using depth cameras such as the Kinect sensor has become more active. The Kinect sensor is a high-performance camera that can acquire a 3D human skeleton structure from RGB, skeleton, and depth images frame by frame in real time. In this paper, we develop a system that captures the motion of the 3D human skeleton structure using the Kinect sensor and stores it in the general-purpose motion file formats TRC and BVH. The system also provides a function that converts captured TRC-format motion files into BVH format. Finally, we confirm visually, using a motion-capture data viewer, that the motion data captured with the Kinect sensor is correct.

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel arrangement and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure scene depth in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF sensors, we apply post-processing to resolve these problems. The depth information from the sensors is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching based on belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
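Seeding stereo matching with warped ToF depth rests on the standard rectified-stereo relation d = f·B/Z; a minimal sketch with hypothetical parameter names, assuming a parallel camera pair:

```python
import numpy as np

def depth_to_initial_disparity(depth, focal_px, baseline):
    """Convert metric ToF depth Z (already warped into the color view)
    into pixel disparity d = f * B / Z for a rectified, parallel
    camera pair: focal length f in pixels, baseline B in the same
    metric units as Z. Invalid (non-positive) depths map to 0."""
    depth = np.asarray(depth, dtype=float)
    disparity = np.zeros_like(depth)
    valid = depth > 0
    disparity[valid] = focal_px * baseline / depth[valid]
    return disparity
```

Initializing belief propagation near these values narrows the disparity search range, which is one plausible source of the reduced matching time the abstract reports.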

A study on compensation of distorted 3D depth in the triple fresnel lenses floating image system

  • Lee, Kwang-Hoon; Kim, Soo-Ho; Yoon, Young-Soo; Kim, Sung-Kyu
    • Proceedings of the Korean Information Display Society Conference / 2007.08b / pp.1490-1493 / 2007
  • We propose a method to obtain a 3D image with correct depth in both the front and rear directions when a stereogram is displayed to an observer through an optical system. A stereogram magnified by lenses does not present correct depth to an observer, despite having correspondingly magnified disparity. We achieve this by relating the compensated disparities in both directions to the magnification of the lenses, the viewing distance, and the base distance of the viewer in AFIS.

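One plausible reading of the compensation idea in this abstract, using standard stereoscopic viewing geometry (eye base e, viewing distance L, screen parallax p, perceived depth behind the display D = L·p/(e − p)): invert that relation to find the parallax that yields the intended depth, then pre-divide it by the optical magnification so the lenses restore it. This is a hedged sketch, not the paper's derivation:

```python
def perceived_depth(parallax, eye_base, view_dist):
    """Perceived depth relative to the display plane (positive =
    behind it) from standard stereoscopic geometry:
        D = view_dist * parallax / (eye_base - parallax)."""
    return view_dist * parallax / (eye_base - parallax)

def compensated_parallax(target_depth, eye_base, view_dist, magnification):
    """Assumed compensation scheme: solve the geometry above for the
    screen parallax that yields the target depth,
        p = eye_base * D / (view_dist + D),
    then divide by the optical magnification so that, after the lenses
    magnify the displayed disparity, the intended depth is perceived."""
    p = eye_base * target_depth / (view_dist + target_depth)
    return p / magnification
```

The round trip is exact by construction: magnifying the compensated parallax and feeding it back through the perceived-depth relation recovers the target depth.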

Real-Time Stereoscopic Image Conversion Using Motion Detection and Region Segmentation (움직임 검출과 영역 분할을 이용한 실시간 입체 영상 변환)

  • Kwon, Byong-Heon; Seo, Burm-Suk
    • Journal of Digital Contents Society / v.6 no.3 / pp.157-162 / 2005
  • In this paper, we propose real-time methods that convert 2D moving images into stereoscopic images using a depth map formed from motion detection in the 2D moving image and region segmentation of the image. The depth map, which represents the depth information of the image, and the proposed absolute parallax image are used as measures for qualitative evaluation. We compared depth information, parallax processing, and segmentation between objects at different depths for the proposed and conventional methods. The results confirm that the proposed method offers a realistic stereoscopic effect for moving images, regardless of the direction and velocity of the moving objects.

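A minimal sketch of combining a motion cue with a crude region cue into one depth map, in the spirit of (but not identical to) the method this abstract describes; the luminance quantization below merely stands in for real region segmentation, and the 50/50 weighting is an assumption:

```python
import numpy as np

def motion_depth_map(prev_gray, cur_gray, n_regions=3, motion_weight=0.5):
    """Illustrative motion-plus-segmentation depth cue: pixels with
    large frame differences are treated as nearer, and a luminance
    quantization stands in for region segmentation so that pixels in
    the same region share a similar base depth level."""
    motion = np.abs(cur_gray - prev_gray)
    if motion.max() > 0:
        motion = motion / motion.max()          # normalize to [0, 1]
    # luminance-quantized "regions", each mapped to a base depth level
    levels = np.minimum((cur_gray * n_regions).astype(int), n_regions - 1)
    base = levels / max(n_regions - 1, 1)
    return motion_weight * motion + (1 - motion_weight) * base
```

Blending the two cues is what lets the result degrade gracefully: when nothing moves, the region term still yields a usable depth map, matching the abstract's claim of robustness to object direction and velocity.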

3DTIP: 3D Stereoscopic Tour-Into-Picture of Korean Traditional Paintings (3DTIP: 한국 고전화의 3차원 입체 Tour-Into-Picture)

  • Jo, Cheol-Yong; Kim, Man-Bae
    • Journal of Broadcast Engineering / v.14 no.5 / pp.616-624 / 2009
  • This paper presents a 3D stereoscopic TIP (Tour Into Picture) method for Korean classical paintings composed of persons, boats, and landscape. Unlike conventional TIP methods, which provide a 2D image or video, the proposed TIP provides users with 3D stereoscopic content. Navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method first builds input data consisting of a foreground mask, a background image, and a depth map. The second step is to navigate the picture and obtain rendered images by orthographic or perspective projection. Then, two depth-enhancement schemes, a depth template and Laws depth, are used to reduce the cardboard effect and thus enhance the perceived 3D depth of foreground objects. In experiments, the proposed method was tested on 'Danopungjun' and 'Muyigido', famous paintings from the Chosun Dynasty. The stereoscopic animation was shown to deliver new 3D perception compared with 2D video.

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop; Sim, Chang Hun; Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN), with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets used to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator use the Wasserstein distance as the loss function in a minimax game. The DCGAN then restores the lost regions of captured facial depth images through a further learning procedure that uses the trained generator and a new loss function.
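The Wasserstein objective mentioned in this abstract replaces the binary discriminator with a critic that scores samples; a minimal, framework-free sketch of the two losses (illustrative only, not the paper's training code):

```python
import numpy as np

def wasserstein_losses(critic_real, critic_fake):
    """Wasserstein-GAN minimax objective written as two losses to
    minimize: the critic widens the gap between its scores on real
    and generated samples, while the generator raises the critic's
    score on generated samples. Inputs are batches of critic outputs."""
    critic_loss = np.mean(critic_fake) - np.mean(critic_real)
    generator_loss = -np.mean(critic_fake)
    return critic_loss, generator_loss
```

Because the critic's score gap approximates the Wasserstein distance between real and generated depth distributions, this loss tends to give smoother gradients than the original GAN objective, which suits restoration tasks like depth inpainting.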