• Title/Abstract/Keyword: 3D Depth


Multi-Focusing Image Capture System for 3D Stereo Image (3차원 영상을 위한 다초점 방식 영상획득장치)

  • Ham, Woon-Chul;Kwon, Hyeok-Jae;Enkhbaatar, Tumenjargal
    • The Journal of Korea Robotics Society / Vol. 6 No. 2 / pp.118-129 / 2011
  • In this paper, we propose a new camera capture-and-synthesis algorithm that uses multiple captured left and right images to produce a more comfortable sense of 3D depth, together with a 3D image-capture hardware system based on this algorithm. We also present a simple control algorithm for calibrating the capture system's zoom function, using a performance-index measure as feedback to stabilize the focusing control. We further comment on the projection mapping theory, based on a pinhole camera model, under the assumption that a viewer sits 50 cm in front of a 3D LCD screen showing the captured image. Finally, we divide each image into 9 segments, propose a method to find the optimal alignment and focusing based on alignment and sharpness measures, and synthesize the 9 optimized segments by fusion for the best perception of 3D depth.
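The per-segment selection step can be sketched as follows. This is a minimal illustration, assuming a 3x3 grid and a variance-of-Laplacian sharpness measure; the abstract does not specify which sharpness measure the authors actually use.

```python
import numpy as np

def sharpness(seg):
    """Variance-of-Laplacian focus measure (one plausible choice,
    not necessarily the paper's)."""
    lap = (-4 * seg[1:-1, 1:-1] + seg[:-2, 1:-1] + seg[2:, 1:-1]
           + seg[1:-1, :-2] + seg[1:-1, 2:])
    return lap.var()

def fuse_9_segments(captures):
    """Split each capture into a 3x3 grid and keep, per segment,
    the capture with the highest focus measure."""
    h, w = captures[0].shape
    out = np.zeros((h, w))
    hs, ws = h // 3, w // 3
    for i in range(3):
        for j in range(3):
            ys, xs = slice(i * hs, (i + 1) * hs), slice(j * ws, (j + 1) * ws)
            best = max(captures, key=lambda c: sharpness(c[ys, xs]))
            out[ys, xs] = best[ys, xs]
    return out
```

Given several differently focused captures of the same scene, `fuse_9_segments` returns a composite in which each of the 9 regions comes from the capture that was sharpest there.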

Depth Extraction of Integral Imaging Using Correlation (상관관계를 활용한 집적 영상의 깊이 추출 방법)

  • Kim, Youngjun;Cho, Ki-Ok;Kim, Cheolsu;Cho, Myungjin
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 20 No. 7 / pp.1369-1375 / 2016
  • In this paper, we present a depth-extraction method for integral imaging that uses the correlation between elemental images computed with a phase-only filter. Integral imaging is a passive three-dimensional (3D) imaging technique that records ray information from 3D objects through a lenslet array onto a 2D image sensor and displays 3D images using a similar lenslet array. The 2D images formed by the lenslet array have different perspectives and are referred to as elemental images. Since correlations can be calculated between elemental images, the depth information of 3D objects can be extracted from them. To obtain high correlation between elemental images efficiently, we use a phase-only filter. Using this high correlation, corresponding pixels between elemental images can be found, so that depth information can be extracted by a computational reconstruction technique. To validate our method, we carry out an optical experiment and calculate the Peak Sidelobe Ratio (PSR) as a correlation metric.
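Phase-only correlation between two elemental images can be sketched as below: the reference spectrum is reduced to its phase, so the correlation peak becomes sharp and its offset gives the pixel disparity. This is a generic illustration of the technique, not the authors' implementation.

```python
import numpy as np

def phase_only_correlation(ref, tgt):
    """Correlate two elemental images using a phase-only filter:
    keep the phase of the reference spectrum, discard its magnitude."""
    R = np.fft.fft2(ref)
    T = np.fft.fft2(tgt)
    H = np.conj(R) / (np.abs(R) + 1e-12)   # phase-only filter
    corr = np.real(np.fft.ifft2(T * H))
    return np.fft.fftshift(corr)           # center the zero-shift bin

# Shift a test image and recover the disparity from the correlation peak.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = np.roll(ref, (0, 5), axis=(0, 1))    # 5-pixel horizontal disparity
corr = phase_only_correlation(ref, tgt)
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = peak[0] - 32, peak[1] - 32        # offset from the centered origin
print(dy, dx)
```

The recovered (dy, dx) equals the disparity between the two elemental images; applied over local windows, such offsets feed the computational depth reconstruction described in the abstract.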

Study on torso patterns for elderly obese women for vitalization of the silver clothing industry - Applying the CLO 3D program - (실버 의류산업 활성화를 위한 노년 비만여성의 토르소 원형 연구 - CLO 3D 가상착의 시스템 활용 -)

  • Seong, Ok jin;Ha, Hee Jung
    • The Research Journal of the Costume Culture / Vol. 25 No. 4 / pp.476-487 / 2017
  • The purpose of this study was to suggest torso patterns that fit the three main body shapes of elderly obese women. To reduce the time, cost, and trial and error needed to make patterns, the CLO program for 3D test wear was employed. Three virtual models of aged obese women were used, with the YUKA system used to produce the torso patterns. 3D simulation of test wear and corrections was performed to design optimal torso patterns. The results were as follows. First, for the three obese body-shape models realized in CLO 3D, Type 1 is a lower-body obesity shape, Type 2 an abdominal obesity shape, and Type 3 a whole-body obesity shape. Second, to draft the study patterns, the actual measured values of back waist length and waist-to-hip length were used. The armhole depth (B/4-1.5), front interscye (B/6+2.3), front neck width (B/12-0.5), front neck depth (B/12+0.5), front waist measurement (W/4+1.5+D), front hip measurement (H/4+2+0.5), and back hip measurement (H/4+3-0.5) were calculated using formulas. Third, according to the results of test-wearing the study patterns, the reduced front neck width and depth improved the neck fit, and the reduced armhole depth corrected loose or plunging armhole girth and also reduced sagging at the bust circumference. Tight sides caused by a protruding waist and abdomen were improved by increasing the surplus at the back waist and at the back and front hip circumference. The exterior was enhanced by relocating the back and front darts, which distributed the surplus better.

A Study of Using the Magnifying Lens to Detect the Detailed 3D Data in the Stereo Vision (양안입체시에서 3차원 정밀 데이터를 얻기 위한 확대경 사용에 관한 연구)

  • Cha, Kuk-Chan
    • Journal of Korea Multimedia Society / Vol. 9 No. 10 / pp.1296-1303 / 2006
  • The range-based method makes it easy to obtain detailed 3D data, but the image-based method does not. In this paper, I suggest a new approach for obtaining detailed 3D data from a magnified stereo image. The main idea is to use a magnifying lens, which not only magnifies the object but also increases the depth resolution. The relation between the amplification of disparity and the increase in depth resolution is verified mathematically, and a method to improve the original 3D data is suggested.


A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication / Vol. 7 No. 1 / pp.15-19 / 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, and depth keying. There are various methods of acquiring depth information: using a stereo camera, a time-of-flight (TOF) depth camera, 3D modeling software, a 3D scanner, or structured light as in Microsoft's Kinect. In particular, a TOF depth camera measures distance using infrared light, so the TOF sensor depends on the image sensor's (CCD/CMOS) sensitivity to that light. Existing image sensors must therefore bundle several pixels together to obtain an infrared image, which reduces the resolution of the image. This paper proposes a method of acquiring a high-resolution image by gradually moving the bundled area while acquiring low-resolution images through the pixel-bundling method. With this method, one can obtain image information with improved illumination intensity (lux) and resolution without upgrading the image sensor, since the gradual pixel-bundling algorithm resolves the low illumination intensity without sacrificing image resolution.
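The binning trade-off can be sketched as follows: summing 2x2 pixel blocks quadruples the collected infrared signal per output pixel but halves the resolution, and shifting the binning window by one pixel between captures produces differently phased low-resolution samplings of the same scene. This is a loose model of the idea, assuming 2x2 bundling; the paper's actual sensor readout scheme is not specified here.

```python
import numpy as np

def bin_pixels(frame, shift=(0, 0)):
    """Sum 2x2 blocks starting at `shift`: 4x the light per output pixel,
    half the resolution, as when a TOF sensor bundles pixels for IR gain."""
    dy, dx = shift
    f = np.pad(frame, ((dy, (2 - dy) % 2), (dx, (2 - dx) % 2)), mode='edge')
    h, w = f.shape
    return f.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16.0).reshape(4, 4)
low = bin_pixels(frame)               # one low-resolution, high-SNR capture
shifted = bin_pixels(frame, (1, 1))   # window moved one pixel: new samples
print(low)
```

Interleaving several such shifted captures yields more distinct samples than any single binned frame, which is the basis for recovering resolution while keeping the binned sensitivity.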

3D Image Display Method using Synthetic Aperture integral imaging (Synthetic aperture 집적 영상을 이용한 3D 영상 디스플레이 방법)

  • Shin, Dong-Hak;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 16 No. 9 / pp.2037-2042 / 2012
  • Synthetic aperture integral imaging (SAII) is one of the promising 3D imaging techniques, capturing high-resolution elemental images with multiple cameras. In this paper, we propose a method of displaying 3D images in space using the SAII technique. Since the elemental images captured by SAII cannot be used directly to display 3D images in an integral imaging display system, we first extract a depth map from the elemental images and then transform them into novel elemental images for 3D display. The newly generated elemental images are shown on a display panel to form 3D images in space. To demonstrate the usefulness of the proposed method, we carry out preliminary experiments with a 3D toy object and present the experimental results.

Depth Image Interpolation using Fusion of color and depth Information (고품질의 고해상도 깊이 영상을 위한 컬러 영상과 깊이 영상을 결합한 깊이 영상 보간법)

  • Kim, Ji-Hyun;Choi, Jin-Wook;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / Korean Society of Broadcast Engineers 2011 Fall Conference / pp.8-10 / 2011
  • Among the various ways of acquiring 3D content, the 2D-plus-Depth structure has recently been studied actively because of its advantage of yielding multi-view images. To obtain high-quality 3D video through this structure, it is above all important to produce a high-quality depth image. Time-of-Flight (ToF) depth sensors are used to obtain depth images; they can acquire depth information in real time but suffer from low resolution and noise. An upsampling that preserves the characteristics of the depth image is therefore required to produce high-quality 3D content. Joint Bilateral Upsampling (JBU) is commonly used to increase the resolution of a depth image, but it is not suitable for obtaining depth images at upscaling factors of 4x or more. Hence, to obtain a high-resolution depth image, we perform interpolation to build a guide image and then apply Bilateral Filtering (BF) to improve the image quality. In this paper, we propose a method of constructing a guide image that preserves the characteristics of the depth image through an interpolation that fuses the color and depth images obtained from the 2D-plus-Depth structure. Experimental results show that the proposed method preserves the characteristics of the depth image better than existing interpolation methods in both boundary regions and smooth regions.
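The JBU baseline that the abstract compares against can be sketched as below: each high-resolution depth pixel is a weighted average of nearby low-resolution depth samples, weighted by spatial distance and by similarity in the high-resolution color guide. This is a minimal grayscale-guide sketch of standard JBU, not the proposed fusion method.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1):
    """Baseline JBU: spatial Gaussian x guide-similarity Gaussian weights."""
    h, w = guide_hr.shape
    out = np.zeros((h, w))
    r = max(1, int(2 * sigma_s))
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yn = min(max(y + dy, 0), h - 1)   # clamp to the image
                    xn = min(max(x + dx, 0), w - 1)
                    ws = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))
                    dc = guide_hr[y, x] - guide_hr[yn, xn]
                    wr = np.exp(-dc ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[yn // scale, xn // scale]
                    den += ws * wr
            out[y, x] = num / den
    return out
```

With a small range sigma, depth samples are only averaged across pixels whose guide values match, which is how JBU keeps depth edges aligned with color edges; the paper's point is that this breaks down at large upscaling factors.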


3D Panorama Generation Using Depth-Map Stitching

  • Cho, Seung-Il;Kim, Jong-Chan;Ban, Kyeong-Jin;Park, Kyoung-Wook;Kim, Chee-Yong;Kim, Eung-Kon
    • Journal of information and communication convergence engineering / Vol. 9 No. 6 / pp.780-784 / 2011
  • As the popularization and development of 3D displays make it easy for ordinary users to experience stereoscopic 3D virtual reality, the demand for virtual-reality content is increasing. In this paper, we propose a 3D panorama system using a depth-map generation method based on vanishing-point location. A 3D panorama built by depth-map stitching gives users the feeling of standing in a real place and looking around their surroundings. The 3D panorama also provides a free viewpoint for both nearby and remote objects and delivers stereoscopic 3D video.

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / Vol. 31 No. 4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, for constructing a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters (focal length, principal point, and distortion coefficients) are calculated through a checkerboard experiment. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted into 3D images, yielding spatial information based on the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
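The conversion from the 2D pixel coordinate system to 3D camera-frame points follows the standard pinhole model: once calibration supplies the focal length and principal point, each pixel with a depth reading back-projects to a 3D point. The intrinsic values below are generic placeholders, not the calibrated values reported in the paper.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z (meters)
    -> 3D point in the camera frame, using calibrated intrinsics."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative intrinsics (focal length in pixels, principal point).
fx = fy = 525.0
cx, cy = 319.5, 239.5
p = backproject(400, 300, 2.0, fx, fy, cx, cy)
print(p)   # approximately [0.3067, 0.2305, 2.0]
```

Applying this to every depth pixel, then transforming the points of the second camera by its rotation matrix and translation vector, merges the two Kinect views into one spatial model as the abstract describes.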