• Title/Summary/Keyword: 3D Depth


Depth Boundary Sharpening for Improved 3D View Synthesis (3차원 합성영상의 화질 개선을 위한 깊이 경계 선명화)

  • Song, Yunseok;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37A no.9
    • /
    • pp.786-791
    • /
    • 2012
  • This paper presents a depth boundary sharpening method for improved view synthesis in 3D video. In depth coding, distortion occurs around object boundaries, degrading the quality of synthesized images. To address this problem, the proposed method estimates an edge map for each frame so that only the boundary regions are filtered. In particular, a window-based filter is employed to choose the most reliable pixel as the replacement, considering three factors: frequency, similarity, and closeness. The proposed method was implemented as post-processing of the deblocking filter in JMVC 8.3. Compared to conventional methods, the proposed method yielded a 0.49 dB PSNR increase and a 16.58% bitrate decrease on average. The improvements were confirmed subjectively as well.
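The window-based replacement step described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the reciprocal scoring of similarity and closeness, the equal weights, and the exclusion of the centre pixel from the candidate set are all assumptions.

```python
import numpy as np

def sharpen_depth_boundaries(depth, edge_mask, radius=2,
                             w_freq=1.0, w_sim=1.0, w_close=1.0):
    """For each boundary pixel, replace its depth with the candidate value
    in the surrounding window that scores best on frequency (how common the
    value is), similarity (to the centre depth) and closeness (spatially)."""
    h, w = depth.shape
    out = depth.copy()
    for y, x in zip(*np.nonzero(edge_mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        win = depth[y0:y1, x0:x1]
        keep = np.ones(win.shape, dtype=bool)
        keep[y - y0, x - x0] = False          # distrust the boundary pixel itself
        cand = win[keep]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dist = np.hypot(yy - y, xx - x)[keep]
        vals, counts = np.unique(cand, return_counts=True)
        freq = counts / counts.sum()                                   # frequency
        sim = 1.0 / (1.0 + np.abs(vals.astype(float) - float(depth[y, x])))  # similarity
        close = np.array([1.0 / (1.0 + dist[cand == v].min()) for v in vals])  # closeness
        out[y, x] = vals[np.argmax(w_freq * freq + w_sim * sim + w_close * close)]
    return out
```

On a synthetic two-region depth map, a stray value on the boundary is replaced by the dominant nearby depth.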

A study on compensation of distorted 3D depth in the triple fresnel lenses floating image system

  • Lee, Kwnag-Hoon;Kim, Soo-Ho;Yoon, Young-Soo;Kim, Sung-Kyu
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1490-1493
    • /
    • 2007
  • We proposed a method for obtaining 3D images with correct depth in both the front and rear directions when a stereogram is displayed to an observer through an optical system. The stereogram magnified by the lenses does not present correct depth to the observer, despite having the correspondingly magnified disparity. We therefore achieved our goal by relating the compensated disparities in both directions to the magnification of the lenses, the viewing distance, and the base distance of the viewer in AFIS.


A Study on Octree Construction Algorithm for 3D Objects (3차원 물체에 대한 8진 트리 구성 알고리즘에 관한 연구)

  • 최윤호;송유진;홍민석;박상희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.1
    • /
    • pp.1-10
    • /
    • 1992
  • This study presents a complete octree construction algorithm that exactly represents a 3D object using 2D depth images obtained from orthogonal face views. In constructing the quadtrees, an optimal quadtree construction algorithm is applied to the depth images for efficient use of memory and reduced tree construction time. In addition, pseudo-octrees are constructed by the proposed method, which builds a pseudo-octree from the resolution value given at each node of the constructed quadtree and the mapping relation between quadrants and octants. Finally, a complete octree representing the 3D object is constructed by volume intersection of the constructed pseudo-octrees. The representation accuracy of the resulting complete octree is evaluated using a 3D display method and a volume-ratio method.
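The volume-intersection and octree-merging ideas can be illustrated with a dense voxel grid standing in for the pseudo-octrees. This is a simplified sketch under assumed conventions: each depth image is taken to mark where the object begins along its axis, with the object assumed to extend to the far boundary, which is not the paper's exact pseudo-octree construction.

```python
import numpy as np

def carve_volume(front, side, top, n):
    """Intersect three orthogonal n x n depth views into an n^3 voxel grid.
    A voxel survives only if every view places it at or behind the surface
    (assumed carving convention)."""
    vol = np.zeros((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                vol[x, y, z] = (z >= front[y, x] and
                                x >= side[y, z] and
                                y >= top[z, x])
    return vol

def build_octree(vol):
    """Recursively merge the voxel grid: 'F' = full leaf, 'E' = empty leaf,
    otherwise a list of eight child nodes (power-of-two sizes assumed)."""
    if vol.all():
        return 'F'
    if not vol.any():
        return 'E'
    h = vol.shape[0] // 2
    return [build_octree(vol[i:i + h, j:j + h, k:k + h])
            for i in (0, h) for j in (0, h) for k in (0, h)]
```

With all three depth views equal to 1 on a 4-voxel-wide grid, the carved object is the inner 3×3×3 cube and the root octree node splits into eight children.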


Contribution of color to perception of 2D and 3D motion

  • Shioiri, Satoshi
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1152-1153
    • /
    • 2009
  • Although the motion impression is weak with isoluminant color stimuli, it has been shown that color signals influence motion perception. We discuss similarities and differences between color motion and luminance motion, focusing on the temporal characteristics of 2D and 3D motion perception.


Depth-Map Generation using Fusion of Foreground Depth Map and Background Depth Map (전경 깊이 지도와 배경 깊이 지도의 결합을 이용한 깊이 지도 생성)

  • Kim, Jin-Hyun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.07a
    • /
    • pp.275-278
    • /
    • 2012
  • In this paper, we propose a method for generating a depth map from a 2D image for automatic 2D-to-3D conversion. For greater accuracy, the proposed method generates a foreground depth map and a background depth map separately and then fuses them. First, to build the foreground depth map, a focus/defocus depth map is generated using a Laplacian pyramid, and a motion-parallax depth map is generated from the motion parallax obtained through block matching. The focus/defocus depth map cannot extract depth information in homogeneous regions, and the motion-parallax depth map cannot extract depth information from images in which no motion parallax occurs; fusing the two resolves the weakness of each. The background depth map is generated by applying linear perspective and line tracing. Finally, the foreground and background depth maps are fused to produce a more accurate depth map. Experimental results confirm that the proposed method generates more accurate depth maps than existing methods.
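The per-pixel fusion logic in this abstract can be sketched as follows. This is a minimal illustration: the validity masks and the simple averaging where both foreground cues agree are assumptions, and the focus/defocus, motion-parallax and perspective estimation steps themselves are omitted.

```python
import numpy as np

def fuse_foreground(focus_d, focus_ok, motion_d, motion_ok):
    """Combine the focus/defocus and motion-parallax cues: average where
    both are valid, take whichever single cue is valid otherwise."""
    both = focus_ok & motion_ok
    fg = np.where(both, (focus_d + motion_d) / 2.0,
                  np.where(focus_ok, focus_d, motion_d))
    return fg, focus_ok | motion_ok

def fuse_with_background(fg, fg_ok, bg):
    """Fall back to the background (perspective-based) depth wherever
    no foreground cue produced an estimate."""
    return np.where(fg_ok, fg, bg)
```

Each cue thus covers the other's blind spots: homogeneous regions fall through to motion parallax, static scenes to focus, and pixels with neither cue to the background map.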


3D Image Processing for Recognition and Size Estimation of the Fruit of Plum(Japanese Apricot) (3D 영상을 활용한 매실 인식 및 크기 추정)

  • Jang, Eun-Chae;Park, Seong-Jin;Park, Woo-Jun;Bae, Yeonghwan;Kim, Hyuck-Joo
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.2
    • /
    • pp.130-139
    • /
    • 2021
  • In this study, the size of the fruit of the Japanese apricot (plum) was estimated using a plum recognition and size estimation program based on 3D images, in order to control Eurytoma maslovskii, which causes the most damage to plums, in a timely manner. In 2018, night shooting was carried out using a Kinect 2.0 camera; for night shooting in 2019, a RealSense Depth Camera D415 was used. Based on the acquired images, a plum recognition and size estimation program consisting of four stages (image preprocessing, sizeable-plum extraction, RGB and depth image matching, and plum size estimation) was implemented using MATLAB R2018a. Running the program on 10 images produced an average plum recognition rate of 61.9%, an average plum recognition error rate of 0.5%, and an average size measurement error rate of 3.6%. Continued development of such plum recognition and size estimation programs is expected to enable accurate fruit size monitoring and timely control systems for Eurytoma maslovskii.
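The size-estimation stage presumably back-projects a measured pixel extent through the pinhole camera model using the matched depth. A hypothetical sketch (the focal length in pixels and the numbers below are illustrative, not values from the paper):

```python
def fruit_diameter_mm(pixel_diameter, depth_mm, focal_px):
    """Pinhole back-projection: real extent = pixel extent * depth / focal
    length, with the focal length expressed in pixels."""
    return pixel_diameter * depth_mm / focal_px

# e.g. a plum spanning 35 px at a depth of 600 mm, with an assumed
# 700 px focal length, measures 30 mm across
```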

Terahertz Nondestructive Time-of-flight Imaging with a Large Depth Range

  • Kim, Hwan Sik;Kim, Jangsun;Ahn, Yeong Hwan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.619-626
    • /
    • 2022
  • In this study, we develop a three-dimensional (3D) terahertz time-of-flight (THz-TOF) imaging technique with a large depth range, based on asynchronous optical sampling (ASOPS). THz-TOF imaging with the ASOPS technique enables rapid scanning with a time-delay span of 10 ns. This means that a depth range of 1.5 m is possible in principle, whereas in practice it is limited by the depth of focus determined by the optical geometry, such as the focal length of the scan lens. We characterize the spatial resolution of objects at different vertical positions with a focal length of 5 cm. The lateral resolution varies from 0.8 to 1.8 mm within the vertical range of 50 mm. We obtain THz-TOF images for samples with multiple reflection layers; the horizontal and vertical locations of the objects are successfully determined from the 2D cross-sectional images, or from reconstructed 3D images. For instance, we can identify metallic objects embedded in insulating enclosures over a vertical depth range greater than 30 mm. To demonstrate practical use, we employ the proposed technique to locate a metallic object within a thick chocolate bar, which is not accessible via conventional transmission geometry.

3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.1
    • /
    • pp.53-61
    • /
    • 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so the available depth images vary widely in character. In this paper, we deal with the generation of a 3D surface from several 3D point clouds acquired by a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of the input point clouds. Second, the estimated point clouds acquired from two viewpoints are projected onto the same image plane to find correspondences, and registration is conducted by minimizing the resulting errors. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from the several viewpoints. The proposed method reduces computational complexity by searching for corresponding points in the 2D image plane, and works effectively even when the precision of the point clouds is relatively low, by exploiting correlation with the neighborhood. Furthermore, an indoor environment can be reconstructed from depth and color images captured at several positions with the multi-view camera. The reconstructed model can be used for navigation and interaction in virtual environments as well as in Mediated Reality (MR) applications.
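The error-minimizing registration step can be illustrated with the standard least-squares rigid alignment over already-matched point pairs (the Kabsch algorithm). This stands in for the paper's registration; its correspondence search via projection onto a common 2D image plane is omitted here.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) minimizing ||R @ p + t - q||^2
    over matched 3D point pairs P[i] <-> Q[i] (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given exact correspondences, this recovers the relative pose between two viewpoints in closed form.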


Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.6
    • /
    • pp.716-725
    • /
    • 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, after which blob analysis is used to detect robust shapes. When two hands or two feet cross at any position and then separate, an adaptive algorithm recognizes which is the left one and which is the right. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion data: left-right and up-down motion as well as the distance of objects from the camera. This requires a depth value (z axis) in addition to the x-axis and y-axis coordinates of the mono image. The depth value is calculated from the stereo disparity, using only the end effectors in the images. The positions of the inner joints are then calculated, and a 3D character can be visualized using inverse kinematics.
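The disparity-to-depth conversion described above follows the standard stereo triangulation model; a sketch with hypothetical camera parameters (focal length in pixels, baseline in metres):

```python
def stereo_to_3d(u, v, disparity, f_px, baseline, cx, cy):
    """Triangulate pixel (u, v) with stereo disparity d into camera-frame 3D:
    z = f * B / d, then back-project x and y through the pinhole model."""
    z = f_px * baseline / disparity
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z
```

For example, with a 500 px focal length, a 0.1 m baseline and a 10 px disparity, the point lies 5 m from the camera.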
