• Title, Summary, Keyword: 깊이 (depth)

Search results: 6,837

Generating High Resolution Depth Map from Low Resolution Depth Map (저해상도 깊이맵으로부터 고해상도 깊이맵의 생성)

  • Jang, Seong Eun; Kim, Manbae
    • Proceedings of the Korean Society of Broadcast Engineers Conference, pp.137-138, 2011
  • Depth sensors have recently been used in a wide range of image-processing applications such as computer vision. However, because the depth maps produced by depth sensors have low resolution, upsampling to high resolution is required. Many methods for converting a low-resolution depth map into a high-resolution one have been proposed, but they are limited to improving object edges. This paper therefore proposes a method that improves not only the edges but also the interior of objects. The proposed method obtains an improved high-resolution depth map by applying high-frequency components to conventional interpolation methods. (A brief illustrative sketch follows this entry.)

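The abstract above only states that high-frequency components are applied on top of conventional interpolation, so the following is merely a minimal sketch of that general idea, not the paper's algorithm: bicubic upsampling followed by an unsharp-mask style re-injection of high-frequency detail. The function name, the 4x scale factor, and the sigma/alpha parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def upsample_with_high_frequency(depth_lr, scale=4, sigma=1.5, alpha=0.6):
    """Upsample a low-resolution depth map and re-inject high-frequency detail.

    depth_lr : 2-D float32 array, low-resolution depth map.
    scale    : upsampling factor (illustrative).
    sigma    : Gaussian blur used to split low/high frequencies.
    alpha    : weight of the high-frequency component added back.
    """
    h, w = depth_lr.shape
    # Base upsampling with a conventional interpolation (bicubic here).
    depth_hr = cv2.resize(depth_lr, (w * scale, h * scale),
                          interpolation=cv2.INTER_CUBIC)
    # Estimate the high-frequency component of the upsampled map
    # (unsharp-mask style: original minus its low-pass version).
    low_pass = cv2.GaussianBlur(depth_hr, (0, 0), sigma)
    high_freq = depth_hr - low_pass
    # Add the high-frequency component back, sharpening both object edges
    # and interior structure.
    return depth_hr + alpha * high_freq

if __name__ == "__main__":
    depth_lr = np.random.rand(120, 160).astype(np.float32)  # synthetic example
    depth_hr = upsample_with_high_frequency(depth_lr)
    print(depth_hr.shape)  # (480, 640)
```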

Multi-GPU based Fast Multi-view Depth Map Generation Method (다중 GPU 기반의 고속 다시점 깊이맵 생성 방법)

  • Ko, Eunsang; Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference, pp.236-239, 2014
  • Producing 3D video requires depth information together with color images from multiple viewpoints. However, the ToF cameras used to acquire depth have low resolution, and because of infrared-frequency interference at most three of them can be used simultaneously. Upsampling the depth information is therefore essential before it can be used with the color images. Upsampling proceeds by 3D-warping the depth to each color camera position and filling the empty regions with a joint bilateral filter (JBF). Although upsampling takes a long time, it can be accelerated with graphics processing units (GPUs). This paper proposes a method that generates multi-view depth maps quickly through parallel execution on multiple GPUs. The parallel implementation uses CUDA, a general-purpose GPU computing (GPGPU) platform, and in experiments with three GPUs the proposed method generated multi-view depth maps at 35 frames per second. (A simple JBF sketch follows this entry.)

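The multi-GPU CUDA pipeline described above is not reproduced here; as a hedged illustration of the joint bilateral filter (JBF) step that fills empty regions after 3D warping, the plain NumPy sketch below weights valid depth samples by spatial distance and color similarity. The window radius and sigma values are assumptions, and a real implementation would run this per-pixel loop on the GPU.

```python
import numpy as np

def joint_bilateral_fill(depth, color, radius=5, sigma_s=3.0, sigma_r=10.0):
    """Fill holes (depth == 0) with a joint bilateral filter guided by color.

    depth : 2-D float32 array aligned with the color image; 0 marks a hole.
    color : 2-D float32 grayscale guidance image of the same size.
    """
    h, w = depth.shape
    out = depth.copy()
    for y, x in zip(*np.nonzero(depth == 0)):          # hole pixels to fill
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d = depth[y0:y1, x0:x1]
        c = color[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        valid = d > 0                                  # only valid depth contributes
        if not valid.any():
            continue
        spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        range_w = np.exp(-((c - color[y, x]) ** 2) / (2 * sigma_r ** 2))
        weight = spatial * range_w * valid
        out[y, x] = np.sum(weight * d) / (np.sum(weight) + 1e-8)
    return out
```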

Hole Filling Technique for Depth Map using Color Image Pixel Clustering (컬러 영상 화소 분류를 이용한 깊이 영상의 홀을 채우는 기법)

  • Lee, Geon-Won; Han, Jong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference, pp.55-57, 2020
  • As demand for immersive media grows, depth images, which are essential for producing immersive media content, are becoming increasingly important. Depth images computed from multi-view video contain holes around objects and in background regions. This paper proposes a method that considers the color characteristics of the corresponding color image when filling such holes. Pixels of the color image are classified into classes by color similarity, and only valid depth values belonging to the same class are used to predict the depth of a hole. The proposed method fills the holes in a depth image efficiently and yields realistic depth information for immersive media production. (A clustering-based sketch follows this entry.)

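The abstract does not say which clustering algorithm is used, so the sketch below is only an assumed instance of the idea: k-means clustering of the color pixels (via OpenCV), after which each depth hole is filled using only valid depth values from neighbors in the same color class. The cluster count and window radius are illustrative.

```python
import cv2
import numpy as np

def fill_depth_holes_by_color_clusters(depth, color, n_clusters=8, radius=7):
    """Fill depth holes (depth == 0) using only depth values whose pixels
    fall in the same color cluster as the hole pixel.

    depth : 2-D float32 array; 0 marks a hole.
    color : H x W x 3 uint8 color image aligned with the depth map.
    """
    h, w = depth.shape
    # Cluster all color pixels by color similarity (k-means on RGB values).
    samples = color.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(samples, n_clusters, None, criteria,
                              3, cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.reshape(h, w)

    out = depth.copy()
    for y, x in zip(*np.nonzero(depth == 0)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d = depth[y0:y1, x0:x1]
        # Use only valid depths from neighbors in the same color class.
        same = (labels[y0:y1, x0:x1] == labels[y, x]) & (d > 0)
        if same.any():
            out[y, x] = np.median(d[same])
    return out
```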

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.6, pp.1-7, 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data are then regularized and provided as a depth image. This depth image is used together with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is combined with the color images by various methods to obtain the high-resolution depth of the scene. In this paper, we introduce the principle and various sensor-fusion techniques for high-quality depth generation using multiple color cameras together with depth cameras. (A small warping sketch follows this entry.)
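
As a hedged illustration of how a corrected depth image can be brought into the color camera's image plane before fusion, the sketch below forward-warps a depth map with assumed intrinsic and extrinsic parameters; it ignores occlusion handling (no z-buffer) and assumes equal image resolutions, so it is a simplification rather than the paper's method.

```python
import numpy as np

def warp_depth_to_color_view(depth, K_d, K_c, R, t):
    """Forward-warp a depth image from the depth camera into the color camera plane.

    depth : H x W metric depth map from the depth camera (0 = invalid).
    K_d   : 3x3 intrinsic matrix of the depth camera.
    K_c   : 3x3 intrinsic matrix of the color camera.
    R, t  : rotation (3x3) and translation (3,) from the depth to the color camera.
    Returns a sparse depth map in the color image plane (0 where nothing landed).
    Simplified: same output resolution as the input and no z-buffering.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth.ravel() > 0
    z = depth.ravel()[valid]
    u = us.ravel()[valid]
    v = vs.ravel()[valid]
    # Back-project valid depth pixels to 3-D points in the depth camera frame.
    pts = np.linalg.inv(K_d) @ np.stack([u * z, v * z, z])
    # Transform into the color camera frame and project onto its image plane.
    pts_c = R @ pts + t.reshape(3, 1)
    proj = K_c @ pts_c
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(depth)
    inside = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    out[vc[inside], uc[inside]] = pts_c[2, inside]   # depth seen from the color camera
    return out
```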

Effect of Plowing Depth on Growth and Tuber Yield in C. auriculatum Introduced from China (경운깊이가 중국도입종 넓은잎큰조롱의 생육 및 근수량에 미치는 영향)

  • Nam, Sang-Yeong; Kim, Min-Ja; Kim, In-Jae; Lee, Jeong-Kwan; Rho, Chang-Woo; Yun, Tae; Min, Kyeong-Beom
    • Korean Journal of Medicinal Crop Science, v.16 no.2, pp.69-73, 2008
  • Field experiments were conducted from 2005 to 2006 to investigate the effects of various tillage depths (TD) on the productivity and quality of C. auriculatum Royle ex Wight. Vine length was greater in the shallower TD treatments, being about 50 cm longer at 10 cm TD than at 30 cm TD, and stem diameter and dry weight also increased at shallower TD. Leaf length, width, and weight likewise showed greater growth in the shallower TD treatments, whereas chlorophyll content increased in the deeper TD treatments. Root number and length increased in the deeper TD treatments, but root diameter and decayed roots also increased at deeper TD. Total root yield tended to increase with deeper TD, being 6.2 ton/ha at 10 cm TD and 7~9% higher in the 20 cm TD treatment.

The Enhancement of the Boundary-Based Depth Image (경계 기반의 깊이 영상 개선)

  • Ahn, Yang-Keun; Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information, v.17 no.4, pp.51-58, 2012
  • Recently, 3D technology based on depth images has been widely used in various fields, including 3D space recognition, image acquisition, interaction, and games. A depth camera is used to produce the depth image, and various efforts are made to improve its quality. In this paper, we suggest using an area-based Canny edge detector to improve the depth image when applying depth-camera-based 3D technology. The suggested method provides an improved depth image through pre-processing and post-processing that correct the quality deterioration which may occur when a depth image is acquired in a constrained environment. For objective image quality evaluation, applying the improved depth image to the virtual view reference software and comparing the results confirmed an improvement of up to 0.42 dB. In addition, with the DSCQS (Double Stimulus Continuous Quality Scale) method, the effectiveness of the improved depth image was confirmed through a systematic evaluation of subjective quality. (A brief edge-guided sketch follows this entry.)
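
The paper's pre- and post-processing are not specified in the abstract, so the snippet below is only one plausible, simplified reading of an edge-guided enhancement: Canny edges computed on the color image protect object boundaries while a median filter suppresses depth noise elsewhere. The thresholds and kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance_depth_with_canny(depth, color, low=50, high=150, ksize=5):
    """Smooth depth noise while preserving boundaries detected by Canny.

    depth : 2-D uint8 or float32 depth image.
    color : H x W x 3 uint8 color image aligned with the depth image.
    """
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)                 # boundary map from the color image
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    smoothed = cv2.medianBlur(depth, ksize)            # suppress depth noise
    # Keep original values on and near boundaries, smoothed values elsewhere.
    return np.where(edges > 0, depth, smoothed)
```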

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae; Choi, Yoo-Joo; Lee, Ju-Hwan
    • Science of Emotion and Sensibility, v.16 no.1, pp.29-42, 2013
  • In this paper, we analyze depth perception, volume perception, and visual discomfort according to changes in the quality of the depth map and the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth maps generated from different depth positions of the major object and the background, each represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels, and the detail of the depth map was also represented at three levels. Experimental results showed that the quality of the depth map affected depth perception, volume perception, and visual discomfort differently depending on the absolute and relative depth position of the major object. A cardboard-style depth map severely damaged volume perception regardless of the depth position of the major object, and it degraded depth perception more severely when the major object was located behind the screen plane than in front of it. Furthermore, subjects did not feel any difference in depth perception, volume perception, or visual comfort between 3D images generated with a detailed depth map and those generated with a rough one. As a result, an excessively detailed depth map is not necessary for enhancing stereoscopic perception in 2D-to-3D conversion.


A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang; Lee, Jae-Ho; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.6, pp.8-17, 2011
  • Recently, fusion camera systems that combine depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually yield low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, comprising camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map placed in the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm guarantees a high-quality result in the presence of camera calibration errors, and experimental comparison with other data fusion techniques shows the superiority of the proposed method. (A toy regression sketch follows this entry.)
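
The paper's confidence estimation and modified kernel regression are not detailed in the abstract; the toy sketch below only shows the regression side of the idea, a Nadaraya-Watson style estimate with Gaussian spatial and color kernels that simply skips samples whose (externally supplied) confidence falls below a threshold. All parameters are illustrative.

```python
import numpy as np

def confidence_weighted_upsample(depth_warped, color, confidence,
                                 radius=4, sigma_s=2.0, sigma_r=12.0,
                                 conf_thresh=0.5):
    """Kernel regression upsampling that ignores low-confidence depth samples.

    depth_warped : sparse depth in the color image plane (0 = no sample).
    color        : 2-D grayscale guidance image, same size.
    confidence   : per-pixel confidence in [0, 1] for the warped depth samples.
    """
    h, w = depth_warped.shape
    out = np.zeros_like(depth_warped)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = depth_warped[y0:y1, x0:x1]
            c = color[y0:y1, x0:x1]
            conf = confidence[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Exclude missing samples and samples whose confidence is too low.
            valid = (d > 0) & (conf >= conf_thresh)
            if not valid.any():
                continue
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            range_w = np.exp(-((c - color[y, x]) ** 2) / (2 * sigma_r ** 2))
            weight = spatial * range_w * valid
            out[y, x] = np.sum(weight * d) / (np.sum(weight) + 1e-8)
    return out
```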

Depth location extraction and three-dimensional image recognition by use of holographic information of an object (홀로그램 정보를 이용한 깊이위치 추출과 3차원 영상인식)

  • 김태근
    • Korean Journal of Optics and Photonics, v.14 no.1, pp.51-57, 2003
  • The hologram of an object contains the information of the object's depth distribution as well as the depth location of the object. However, these pieces of information are blended together in the form of a fringe pattern, which makes it hard to extract the depth location of the object directly from the hologram. In this paper, I propose a numerical method that separates the depth location information from the single-sideband hologram by Gaussian low-pass filtering. The depth location of the object is extracted by numerical analysis of the filtered hologram, and the hologram at the object's depth location is recovered using the extracted depth location. (A generic numerical sketch follows this entry.)
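
The paper's derivation is not reproduced here; as a generic numerical illustration of extracting a depth location from a hologram, the sketch below applies a Gaussian low-pass filter in the Fourier domain and then scans candidate propagation distances with the angular spectrum method, picking the distance that maximizes a simple sharpness metric. The filter width, sampling pitch, and focus metric are assumptions.

```python
import numpy as np

def gaussian_lowpass(field, sigma):
    """Gaussian low-pass filtering in the Fourier domain."""
    h, w = field.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    g = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * g)

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular spectrum method."""
    h, w = field.shape
    fy = np.fft.fftfreq(h, d=dx)[:, None]
    fx = np.fft.fftfreq(w, d=dx)[None, :]
    arg = 1.0 / wavelength ** 2 - fx ** 2 - fy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent waves clamped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def estimate_depth_location(hologram, wavelength, dx, z_candidates, sigma=0.05):
    """Return the propagation distance giving the sharpest reconstruction."""
    filtered = gaussian_lowpass(hologram, sigma)
    def sharpness(img):
        gy, gx = np.gradient(np.abs(img))
        return np.mean(gx ** 2 + gy ** 2)
    scores = [sharpness(angular_spectrum_propagate(filtered, wavelength, dx, z))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```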

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.3, pp.13-18, 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene's depth in real time, the output depth images contain noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. Therefore, it is essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weak points of stereo matching. (A toy guided-matching sketch follows this entry.)
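
The abstract indicates that stereo matching is constrained by disparity information derived from the depth cameras; the toy sketch below illustrates that idea with a block-matching routine whose disparity search is restricted to a small band around a prior disparity map. The SAD cost, window size, and search band are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def guided_block_matching(left, right, prior_disp, band=3, half_win=3):
    """Block matching whose disparity search is restricted around a prior.

    left, right : rectified grayscale images (2-D float32), same size.
    prior_disp  : per-pixel disparity prior (e.g., warped from a ToF depth camera).
    band        : half-width of the search range around the prior.
    half_win    : half-size of the matching window.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
            d0 = int(round(prior_disp[y, x]))
            best_cost, best_d = np.inf, d0
            # Search only a small band around the prior disparity.
            for d in range(max(0, d0 - band), d0 + band + 1):
                xr = x - d
                if xr - half_win < 0:
                    continue
                cand = right[y - half_win:y + half_win + 1,
                             xr - half_win:xr + half_win + 1]
                cost = np.sum(np.abs(ref - cand))        # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```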