• Title/Summary/Keyword: 3D Depth


Implicit Surface Representation of Three-Dimensional Face from Kinect Sensor

  • Wibowo, Suryo Adhi;Kim, Eun-Kyeong;Kim, Sungshin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.4
    • /
    • pp.412-417
    • /
    • 2015
  • The Kinect sensor produces two outputs, a color image from its red-green-blue (RGB) sensor and a depth map from its depth sensor. Although this device is cheaper than other devices used for three-dimensional (3D) reconstruction, extra work is needed to reconstruct smooth 3D data that also carry semantic meaning, because the depth map produced by the depth sensor is usually coarse and contains empty values. Consequently, reconstructing it directly in 3D creates artifacts and holes on the surface. In this paper, we present a method that solves this problem using an implicit surface representation. The key idea is to represent the implicit surface with radial basis functions (RBFs); to avoid the trivial solution in which the implicit function is zero everywhere, on-surface and off-surface points must be defined. Simulation results with a captured face as input show that we can produce a smooth 3D face and fill the holes on the 3D face surface, since RBFs are well suited to interpolation and hole filling. A modified anisotropic diffusion is used to produce the smoothed surface.
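
The abstract describes but does not include the RBF construction; the following is a minimal Python/NumPy sketch of the idea, assuming a simple biharmonic kernel phi(r) = r and off-surface points generated by offsetting each point along its normal. The point data, kernel choice, and offset distance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_rbf_implicit(points, normals, offset=0.01):
    """Fit an RBF implicit function f with f(on-surface)=0 and f(off-surface)=+/-offset."""
    # Constraint points: on-surface points plus points offset along the normals.
    centers = np.vstack([points,
                         points + offset * normals,
                         points - offset * normals])
    values = np.concatenate([np.zeros(len(points)),
                             np.full(len(points),  offset),
                             np.full(len(points), -offset)])
    # Pairwise distances and a simple radial kernel (biharmonic: phi(r) = r).
    diff = centers[:, None, :] - centers[None, :, :]
    A = np.linalg.norm(diff, axis=2)
    weights = np.linalg.solve(A + 1e-9 * np.eye(len(A)), values)
    return centers, weights

def eval_rbf(x, centers, weights):
    """Evaluate the implicit function at query points x; the surface is f(x) = 0."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    return r @ weights

# Toy example: points on a unit sphere, where the normal equals the position.
pts = np.random.randn(200, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
centers, w = fit_rbf_implicit(pts, pts)
print(eval_rbf(np.array([[0.0, 0.0, 1.0]]), centers, w))  # ~0 on the surface
```

Evaluating the fitted function on a dense grid and extracting its zero level set interpolates across missing regions, which is how holes in the captured depth data get filled.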

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal
    • /
    • v.36 no.2
    • /
    • pp.242-252
    • /
    • 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated from the coded texture video and the coded depth video. Such synthesized views can be distorted by quantization noise and by inaccuracy of the 3D warping positions; thus, it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge-noise suppression filtering process that preserves the edges of the depth video and a method, based on a total variation approach to maximum a posteriori probability estimation, that reduces the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without post-processing.
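
The abstract names a total-variation MAP approach for the coded texture video without giving the filter itself; below is a generic gradient-descent total-variation denoiser (ROF-style) in Python as an illustration of that family of methods. The weight `lam`, step size, and iteration count are arbitrary, and this is not the paper's exact post-processing filter.

```python
import numpy as np

def tv_denoise(img, lam=0.1, n_iter=100, step=0.2):
    """Gradient descent on the ROF objective ||u - img||^2 / 2 + lam * TV(u)."""
    u = img.astype(np.float64).copy()
    eps = 1e-8
    for _ in range(n_iter):
        # Forward differences of the current estimate.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # Divergence of the normalized gradient field (backward differences).
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Descent step on the data term plus the TV term.
        u -= step * ((u - img) - lam * div)
    return u

noisy = np.random.rand(64, 64)
print(tv_denoise(noisy).shape)  # (64, 64)
```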

Robot System Design Capable of Motion Recognition and Tracking the Operator's Motion (사용자의 동작인식 및 모사를 구현하는 로봇시스템 설계)

  • Choi, Yonguk;Yoon, Sanghyun;Kim, Junsik;Ahn, YoungSeok;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.24 no.6
    • /
    • pp.605-612
    • /
    • 2015
  • Three-dimensional (3D) position determination and motion recognition using a 3D depth camera are applied to a developed penguin-shaped robot, and their validity and closeness of tracking are investigated. The robot is equipped with an Asus Xtion Pro Live as the 3D depth camera and a sound module. Using the skeleton information from the motion recognition data extracted from the camera, the robot is controlled so as to follow three typical mode reactions formed by the operator's gestures. In this study, the extraction of skeleton joint information using the 3D depth camera is introduced, and the tracking performance for the operator's motions is explained.
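
The three mode reactions are not specified in the abstract; the sketch below only illustrates how skeleton joint positions from a depth camera can be mapped to discrete robot reactions. The joint names, the coordinate convention (y up, z away from the camera), and the gesture rules are hypothetical.

```python
import numpy as np

def classify_gesture(joints):
    """Map skeleton joints (name -> xyz in meters, camera frame) to one of three
    hypothetical mode reactions; the actual gestures used in the paper are not
    given in the abstract."""
    head = np.asarray(joints["head"])
    rh = np.asarray(joints["right_hand"])
    lh = np.asarray(joints["left_hand"])
    torso = np.asarray(joints["torso"])
    if rh[1] > head[1] and lh[1] > head[1]:
        return "mode_both_hands_up"
    if rh[1] > head[1]:
        return "mode_right_hand_up"
    if torso[2] - rh[2] > 0.4:            # hand pushed ~40 cm toward the camera
        return "mode_hand_forward"
    return "idle"

joints = {"head": (0.0, 1.6, 2.0), "torso": (0.0, 1.2, 2.0),
          "right_hand": (0.3, 1.8, 2.0), "left_hand": (-0.3, 1.1, 2.0)}
print(classify_gesture(joints))  # mode_right_hand_up
```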

A Development of Depth Budget Control Module for 3D Stereoscopic Image Contents (3차원 입체영상 콘텐츠 제작을 위한 깊이 제어 카메라 모듈 개발)

  • Seo, Chang-Ho;Youn, Joo-Sang;Seo, Jin-Seok;Kim, Nam-Gyu
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.01a
    • /
    • pp.201-203
    • /
    • 2015
  • Producing comfortable 3D stereoscopic image content requires an optimal production method that matches the viewing-environment conditions. Current stereoscopic production processes rely on guidelines based on expert knowledge or viewer experiments, but these guidelines are limited to specific production environments. To derive more concrete and quantitative guidelines, experimental stereoscopic videos that take various camera-control and viewing-environment factors into account must be produced, and diverse viewer-experiment data must be built from those videos. In addition, the production of experimental stereoscopic videos must be possible within a short period and must accommodate changing factors as the purpose of an experiment changes. In this paper, to support the production of a variety of experimental stereoscopic videos, we implement a stereoscopic camera module that runs in a commercial 3D game-engine authoring tool (Unity3D) and allows easy control of the depth budget, and we show examples of stereoscopic video produced with the implemented module.
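
The module itself is a Unity3D asset, so the sketch below is not the paper's code; it is a small Python illustration of the parallax arithmetic behind a depth budget, assuming a parallel stereo rig with sensor shift. Given near and far scene distances, a convergence distance, and lens and sensor parameters (all assumed values), it finds the interaxial separation whose on-screen parallax range stays within the budget.

```python
def parallax_fraction(z, t, focal_mm, sensor_w_mm, conv_dist):
    """On-screen parallax as a fraction of image width for a parallel stereo rig
    with sensor shift converging at conv_dist (t and distances in the same unit)."""
    disparity_mm = focal_mm * t * (1.0 / conv_dist - 1.0 / z)
    return disparity_mm / sensor_w_mm

def interaxial_for_budget(budget, near, far, focal_mm, sensor_w_mm, conv_dist):
    """Largest interaxial whose near-to-far parallax range stays within the
    depth budget (given as a fraction of image width); the range is linear in t."""
    span_per_unit_t = abs(parallax_fraction(near, 1.0, focal_mm, sensor_w_mm, conv_dist)
                          - parallax_fraction(far, 1.0, focal_mm, sensor_w_mm, conv_dist))
    return budget / span_per_unit_t

# Example: 35 mm lens on a 36 mm-wide sensor, scene from 2 m to 20 m,
# convergence at 4 m, total depth budget of 3 % of the image width.
t = interaxial_for_budget(0.03, near=2.0, far=20.0,
                          focal_mm=35.0, sensor_w_mm=36.0, conv_dist=4.0)
print(round(t, 3), "m")  # roughly a 7 cm interaxial for this budget
```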

3D Multiple Objects Detection and Tracking on Accurate Depth Information for Pose Recognition (자세인식을 위한 정확한 깊이정보에서의 3차원 다중 객체검출 및 추적)

  • Lee, Jae-Won;Jung, Jee-Hoon;Hong, Sung-Hoon
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.8
    • /
    • pp.963-976
    • /
    • 2012
  • Aside from voice, gesture is the most intuitive means of communication. Thus, much research on how to control computers using gestures is in progress, and user detection and tracking is one of the most important processes in these studies. Conventional 2D object detection and tracking methods are sensitive to changes in the environment or lighting, and methods that mix 2D and 3D information have the disadvantage of high computational complexity. In addition, conventional 3D-information methods cannot segment objects at similar depths. In this paper, we propose an object detection and tracking method using a depth projection map, the cumulative value of depth and motion information. Simulation results show that our method is robust to changes in lighting or environment, has a faster operation speed, and works well for detecting and tracking objects at similar depths.
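
The abstract defines the depth projection map only as an accumulation of depth and motion information; the sketch below shows one common realization, accumulating moving pixels into an image-column versus depth-bin histogram and treating well-supported cells as object candidates. The bin count, depth range, and threshold are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def depth_projection_map(depth, motion_mask, n_bins=64, max_depth=4000):
    """Accumulate moving pixels into a top-view map: image column vs. depth bin."""
    h, w = depth.shape
    dpm = np.zeros((n_bins, w))
    valid = (depth > 0) & (depth < max_depth) & motion_mask
    rows, cols = np.nonzero(valid)
    bins = (depth[rows, cols] * n_bins // max_depth).astype(int)
    np.add.at(dpm, (bins, cols), 1)
    return dpm

def detect_objects(dpm, threshold=50):
    """Object candidates are cells of the projection map with enough support."""
    return np.argwhere(dpm > threshold)   # (depth_bin, column) pairs

depth = np.random.randint(500, 3500, size=(240, 320)).astype(np.float64)
motion = np.zeros((240, 320), dtype=bool)
depth[100:180, 150:200] = 1500.0          # a person-sized region at ~1.5 m
motion[100:180, 150:200] = True           # ...that is moving
print(detect_objects(depth_projection_map(depth, motion)).shape)  # one compact blob
```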

Computational Integral Imaging Reconstruction of 3D Object Using a Depth Conversion Technique

  • Shin, Dong-Hak;Kim, Eun-Soo
    • Journal of the Optical Society of Korea
    • /
    • v.12 no.3
    • /
    • pp.131-135
    • /
    • 2008
  • Computational integral imaging (CII) has the advantage of generating the volumetric information of a 3D scene without optical devices. However, the reconstruction process of CII requires increasingly larger reconstructed images, and thus the computational cost increases, as the distance between the lenslet array and the reconstructed output plane increases. In this paper, to overcome this problem, we propose a novel CII method using a depth conversion technique. The proposed method can move a far 3D object near the lenslet array and reduce the computational cost dramatically. To show the usefulness of the proposed method, we carry out a preliminary experiment and present its results.
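
As a rough illustration of why depth conversion helps, the sketch below uses the standard computational reconstruction model in which each elemental image is magnified by M = z/g (reconstruction distance over lenslet gap) before superposition, so the output plane, and with it the computation, grows with z. The lenslet count and pixel sizes are made-up numbers, and the paper's actual conversion mapping is not reproduced here.

```python
def reconstruction_size(z, gap, n_lenslets, pixels_per_elemental):
    """Approximate width in pixels of the CIIR output plane at distance z:
    each elemental image is magnified by M = z / gap, and neighbouring
    elemental images are superimposed with a shift of one elemental-image width."""
    magnification = z / gap
    return int(pixels_per_elemental * magnification
               + (n_lenslets - 1) * pixels_per_elemental)

# A far object reconstructed at z = 300 mm vs. the same object after depth
# conversion to z' = 30 mm (gap g = 3 mm): the output plane shrinks considerably.
params = dict(gap=3.0, n_lenslets=30, pixels_per_elemental=30)
print(reconstruction_size(300.0, **params))   # 3870 pixels wide
print(reconstruction_size(30.0, **params))    # 1170 pixels wide
```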

User Detection and Main Body Parts Estimation using Inaccurate Depth Information and 2D Motion Information (정밀하지 않은 깊이정보와 2D움직임 정보를 이용한 사용자 검출과 주요 신체부위 추정)

  • Lee, Jae-Won;Hong, Sung-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.611-624
    • /
    • 2012
  • Aside from voice, gesture is the most intuitive means of communication. Therefore, there is much research on methods that control a computer using gesture input in place of the keyboard or mouse, and user detection and main-body-part estimation is one of the most important processes in this research. In this paper, we propose a method for detecting user objects and estimating main body parts from inaccurate depth information for pose estimation. We present a user detection method that uses 2D information together with 3D depth information, so it is robust to changes in lighting and noise; it processes 2D signals as 1D signals, so it is well suited to real-time use; and it uses previous object information, so it is more accurate and robust. We also present a main-body-part estimation method that uses 2D contour information, 3D depth information, and tracking. Experimental results show that the proposed user detection method is more robust than methods using only 2D information and detects objects exactly even with inaccurate depth information. The proposed main-body-part estimation method also overcomes two disadvantages: main body parts cannot be detected in occluded areas using only 2D contour information, and color information is sensitive to changes in illumination or environment.
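
The abstract does not give the estimation rules; as a crude stand-in, the sketch below picks main-body-part candidates from the extremes and centroid of a binary user mask, which is roughly the kind of contour-based cue the paper combines with depth information and tracking. The mask and the part definitions are hypothetical.

```python
import numpy as np

def estimate_body_parts(user_mask):
    """Crude main-body-part guesses from a binary user mask:
    head = topmost pixel, hands = leftmost / rightmost pixels, torso = centroid."""
    rows, cols = np.nonzero(user_mask)
    if rows.size == 0:
        return None
    head = (rows.min(), int(cols[rows == rows.min()].mean()))
    left_hand = (int(rows[cols == cols.min()].mean()), cols.min())
    right_hand = (int(rows[cols == cols.max()].mean()), cols.max())
    torso = (int(rows.mean()), int(cols.mean()))
    return {"head": head, "left_hand": left_hand,
            "right_hand": right_hand, "torso": torso}

mask = np.zeros((240, 320), dtype=bool)
mask[40:200, 120:200] = True              # body
mask[90:100, 80:120] = True               # one arm stretched out to the left
print(estimate_body_parts(mask))
```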

Effect on Maintenance of Vertical Profile of Stream for Triangle-Type Labyrinth Weir (삼각형 래버린스 위어의 수심유지 효과)

  • Lee, Seung-Oh;Kim, Young-Ho;Im, Jang-Hyuk
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.9 no.3
    • /
    • pp.107-115
    • /
    • 2009
  • The labyrinth weir can be applied to increase the overflow rate, maintain a constant water depth, and improve water quality. It can be defined as a weir whose overflow crest is not a straight line in plan; its overflow length is increased by changing the plan shape. There are relatively few studies on the water-depth maintenance effect, which must be considered together with the various functions of hydraulic facilities and the design conditions of labyrinth weirs, so studies on the maintenance of water depth by the labyrinth weir are needed. This study provides fundamental data that may facilitate more accurate and proper design of hydraulic facilities related to the maintenance of water depth. The ranges of constant water depth ($H_t/P=0.08\sim0.27$) were obtained for the triangle-type labyrinth weir, and the effect of maintaining water depth was analyzed using hydraulic laboratory experiments and 3D numerical simulations (Flow-3D).

Stereoscopic Conversion of Monoscopic Video using Edge Direction Histogram (에지 방향성 히스토그램을 이용한 2차원 동영상의 3차원 입체변환기법)

  • Kim, Jee-Hong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.8C
    • /
    • pp.782-789
    • /
    • 2009
  • In this paper, we propose an algorithm for creating stereoscopic video from monoscopic video. Parallel straight lines in 3D space appear to converge as they recede in a perspective image on a 2D plane and finally meet at one point, called the vanishing point. The vanishing point, the point farthest from the viewer's viewpoint, serves as a depth perception cue from which the viewer perceives the depth of objects and their surroundings. The viewer estimates the vanishing point from geometrical features in a monoscopic image and can perceive depth information from the relationship between the position of the vanishing point and the viewer's viewpoint. In this paper, we propose a method to estimate the vanishing point using an edge direction histogram in a general monoscopic image and to create a depth map depending on the position of the vanishing point. The experimental results show that the proposed conversion method achieves stable stereoscopic conversion of a given monoscopic video.
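
Below is a minimal sketch of the two ingredients named in the abstract: an edge direction histogram computed from image gradients, and a depth map whose values are largest (farthest) at the vanishing point and fall off with distance from it. The paper's actual vanishing-point estimation from the histogram is more involved; here the vanishing point is simply assumed.

```python
import numpy as np

def edge_direction_histogram(gray, n_bins=36):
    """Histogram of gradient directions, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # -pi .. pi
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
    return hist

def depth_from_vanishing_point(shape, vp, max_depth=255):
    """Depth map that is farthest at the vanishing point and nearer away from it."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return (max_depth * (1.0 - dist / dist.max())).astype(np.uint8)

gray = np.random.rand(120, 160)
hist = edge_direction_histogram(gray)
depth = depth_from_vanishing_point(gray.shape, vp=(80, 30))  # assumed vanishing point
print(hist.shape, depth.min(), depth.max())
```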

3D Reconstruction Using a Single Camera (단일 카메라를 이용한 3차원 공간 정보 생성)

  • Kwon, Oh-Young;Seo, Kyoung-Taek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.12
    • /
    • pp.2943-2948
    • /
    • 2015
  • We perform 3D reconstruction using a single camera and, based on this information, are advancing research on a driving assistance apparatus that can inform the driver how to pass an obstacle ahead. The resulting depth information is less accurate, but it can still indicate whether an obstacle straight ahead can be passed. For the 3D reconstruction, the internal camera parameters are measured, the fundamental matrix is calculated, matching feature points are found, and triangulation is executed on this basis. Experiments confirm that, although the depth information contains errors, the information along the X and Y axes, which determines whether an obstacle can be passed, is reliable.
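
The pipeline in the abstract (internal parameters, fundamental matrix, feature matching, triangulation) corresponds to a standard two-view structure-from-motion recipe; the sketch below is one such recipe using OpenCV in Python, not the paper's implementation. The ORB detector, RANSAC thresholds, and the calibration matrix `K` are assumptions, and the two frames would come from consecutive captures while the vehicle moves.

```python
import numpy as np
import cv2

def reconstruct_two_view(img1, img2, K):
    """Two-view reconstruction from a single moving camera: feature matching,
    fundamental/essential matrix estimation, pose recovery, and triangulation."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Robust fundamental matrix; keep only RANSAC inliers.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

    # Essential matrix from F and the measured internal parameters, then pose.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Triangulate matched points with the two projection matrices.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X[:3] / X[3]).T                       # Nx3 points, up to scale
```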