• Title/Summary/Keyword: 깊이 (depth)

The Enhancement of the Boundary-Based Depth Image (경계 기반의 깊이 영상 개선)

  • Ahn, Yang-Keun;Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.4
    • /
    • pp.51-58
    • /
    • 2012
  • Recently, 3D technology based on depth images has been widely used in various fields, including 3D space recognition, image acquisition, interaction, and games. A depth camera is used to produce the depth image, and various efforts are made to improve the quality of that depth image. In this paper, we suggest using an area-based Canny edge detector to improve the depth image when applying 3D technology based on a depth camera. The suggested method provides an improved depth image through pre-processing and post-processing that correct the image quality deterioration which may occur when acquiring a depth image in a limited environment. For objective image quality evaluation, we confirmed that the image is improved by up to 0.42 dB by applying the improved depth image to the virtual view reference software and comparing the results. In addition, with the DSCQS (Double Stimulus Continuous Quality Scale) method, the effectiveness of the improved depth image was confirmed through an evaluation of subjective quality.
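
None of these abstracts include code, but as a rough sketch of the kind of edge-guided depth refinement this paper describes (the function name, the OpenCV-based smoothing step, and all parameters are assumptions, not the authors' actual pipeline), the Canny-based idea might look like this:

```python
import cv2
import numpy as np

def refine_depth_with_edges(depth, color, low=50, high=150):
    """Hypothetical sketch: smooth a noisy depth map while preserving
    object boundaries detected by a Canny edge detector."""
    # Detect object boundaries on the color image with the Canny detector.
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)

    # Smooth the depth map to suppress acquisition noise (pre-processing).
    smoothed = cv2.GaussianBlur(depth, (5, 5), 0)

    # Keep the original depth values on and near the detected boundaries
    # so that object edges are not blurred away (post-processing).
    edge_mask = cv2.dilate(edges, np.ones((3, 3), np.uint8)) > 0
    return np.where(edge_mask, depth, smoothed).astype(depth.dtype)
```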

Effect of Plowing Depth on Growth and Tuber Yield in C. auriculatum Introduced from China (경운깊이가 중국도입종 넓은잎큰조롱의 생육 및 근수량에 미치는 영향)

  • Nam, Sang-Yeong;Kim, Min-Ja;Kim, In-Jae;Lee, Jeong-Kwan;Rho, Chang-Woo;Yun, Tae;Min, Kyeong-Beom
    • Korean Journal of Medicinal Crop Science
    • /
    • v.16 no.2
    • /
    • pp.69-73
    • /
    • 2008
  • Field experiments were conducted from 2005 to 2006 to investigate the effect of various tillage depths (TD) on the productivity and quality of C. auriculatum Royle ex Wight. Vine length was greater in the shallower TD treatments, being about 50 cm longer at 10 cm TD than at 30 cm TD, and stem diameter and dry weight also increased at the shallower TD. Leaf length, width, and weight showed greater growth in the shallower TD treatments, whereas chlorophyll content increased in the deeper TD treatments. Root number and length increased in the deeper TD treatments, and root diameter and the amount of decomposed roots also increased at the deeper TD. Total root yield tended to increase with deeper tillage, from 6.2 ton/ha at 10 cm TD to an increase of 7~9% in the 20 cm TD treatments.

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility
    • /
    • v.16 no.1
    • /
    • pp.29-42
    • /
    • 2013
  • In this paper, we analyze the effects on depth perception, volume perception, and visual discomfort according to changes in the quality of the depth map and the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth maps generated from different depth positions of the major object and the background, each represented at three levels of detail. A subjective test was carried out using the eighteen 3D images, in which the degrees of depth perception, volume perception, and visual discomfort recognized by the subjects were investigated according to the change in the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted in three levels. The detail of the depth map was also represented in three levels. The experimental results showed that the quality of the depth map affected depth perception, volume perception, and visual discomfort differently according to the absolute and relative depth position of the major object. The cardboard depth map severely damaged volume perception regardless of the depth position of the major object. In particular, depth perception was more severely degraded by the cardboard depth map when the major object was located inside the screen than when it was located outside the screen. Furthermore, the subjects did not feel any difference in depth perception, volume perception, or visual comfort between the 3D images generated with the detailed depth map and those generated with the rough depth map. As a result, an excessively detailed depth map was found to be unnecessary for enhancing stereoscopic perception in 2D-to-3D conversion.

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.8-17
    • /
    • 2011
  • Recently, fusion camera systems consisting of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, including camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map placed in the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. Therefore, in this paper, we present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm guarantees high-quality results in the presence of camera calibration errors. Experimental comparison with other data fusion techniques shows the superiority of the proposed method.
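
As a loose illustration of the confidence-aware kernel regression described above (a zeroth-order Nadaraya-Watson form with Gaussian spatial and color kernels; the kernels, thresholds, and names are assumptions rather than the paper's exact formulation):

```python
import numpy as np

def confidence_weighted_upsampling(depth, confidence, color, sigma_s=3.0,
                                   sigma_c=10.0, radius=4, conf_thresh=0.5):
    """Sketch: each output depth is a weighted average of neighboring
    depth samples, excluding samples whose confidence is too low."""
    h, w = depth.shape
    gray = color.mean(axis=2) if color.ndim == 3 else color.astype(np.float64)
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial and color (range) kernels of the regression.
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_c = np.exp(-((gray[y0:y1, x0:x1] - gray[y, x]) ** 2)
                         / (2 * sigma_c ** 2))
            # Exclude depth samples with low estimated confidence.
            mask = (confidence[y0:y1, x0:x1] >= conf_thresh).astype(np.float64)
            weights = w_s * w_c * mask
            if weights.sum() > 0:
                out[y, x] = (weights * depth[y0:y1, x0:x1]).sum() / weights.sum()
    return out
```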

Dense-Depth Map Estimation with LiDAR Depth Map and Optical Images based on Self-Organizing Map (라이다 깊이 맵과 이미지를 사용한 자기 조직화 지도 기반의 고밀도 깊이 맵 생성 방법)

  • Choi, Hansol;Lee, Jongseok;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.283-295
    • /
    • 2021
  • This paper proposes a method for generating a dense depth map from a color image and a LiDAR-based depth map using a self-organizing map. The proposed depth map upsampling method consists of an initial depth prediction step for areas not acquired by the LiDAR and a depth map filtering step. In the initial depth prediction step, stereo matching is performed on two color images to predict initial depth values. In the depth map filtering step, in order to reduce the error of the predicted initial depth values, a self-organizing map technique is applied to each predicted depth pixel using the measured depth pixels around it. In the self-organizing map process, a weight is determined according to the distance between the predicted depth pixel and a measured depth pixel and the difference between their corresponding color values. For performance comparison, we compared the proposed method with the bilateral filter and the k-nearest neighbor method, which are widely used for depth map upsampling. Compared to the bilateral filter and the k-nearest neighbor method, the proposed method reduced the error by about 6.4% and 8.6% in terms of MAE, and by about 10.8% and 14.3% in terms of RMSE, respectively.
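
A rough sketch of the self-organizing-map-style filtering step described above (the neighborhood function, learning rate, and parameter names are illustrative assumptions, not the paper's exact update rule):

```python
import numpy as np

def som_style_depth_refinement(pred_depth, lidar_depth, lidar_mask, color,
                               learning_rate=0.5, sigma_d=5.0, sigma_c=15.0,
                               radius=5):
    """Sketch: pull predicted depth pixels toward nearby measured LiDAR
    depths, weighting by spatial distance and color difference."""
    refined = pred_depth.astype(np.float64).copy()
    gray = color.mean(axis=2) if color.ndim == 3 else color.astype(np.float64)
    h, w = pred_depth.shape
    measured = lidar_mask.astype(bool)
    for y, x in zip(*np.nonzero(measured)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dist2 = (yy - y) ** 2 + (xx - x) ** 2
        cdiff2 = (gray[y0:y1, x0:x1] - gray[y, x]) ** 2
        # Neighborhood function: closer, similarly colored pixels move more.
        weight = learning_rate * np.exp(-dist2 / (2 * sigma_d ** 2)) \
                               * np.exp(-cdiff2 / (2 * sigma_c ** 2))
        weight[measured[y0:y1, x0:x1]] = 0.0   # keep measured depths fixed
        # SOM-style update: move predicted depths toward the measurement.
        refined[y0:y1, x0:x1] += weight * (lidar_depth[y, x]
                                           - refined[y0:y1, x0:x1])
    return refined
```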

Boundary Noise Removal in Synthesized Intermediate Viewpoint Images for 3D Video (3차원 비디오의 중간시점 합성영상의 경계 잡음 제거 방법)

  • Lee, Cheon;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2008.11a
    • /
    • pp.109-112
    • /
    • 2008
  • The 3D video system currently being standardized by MPEG (Moving Picture Experts Group) is a next-generation broadcasting system that uses multi-view video together with depth video, allowing users to select an arbitrary viewpoint or to watch 3D images through a 3D display device such as a stereoscopic display. To provide more viewpoints than the limited number of captured views, a module that interpolates intermediate-view images is essential. Using the depth values given as input to this system, viewpoint shifting is straightforward, and the quality of the interpolated image is determined by the accuracy of these depth values. Depth maps are usually obtained by stereo matching techniques based on computer vision, and depth errors occur mainly in depth-discontinuity regions such as object boundaries. These errors produce unwanted noise in the background of the synthesized intermediate image. Previous methods synthesized the intermediate image under the assumption that the object boundaries of the estimated depth map coincide with the object boundaries of the color image. In practice, however, the two boundaries do not coincide because of the depth estimation process, so part of the foreground is synthesized into the background and produces noise. In this paper, we propose a method for handling the boundary noise that arises when interpolating intermediate-view images based on depth maps. When synthesizing the intermediate image, the disoccluded regions are filled first, regions where boundary noise may occur are then identified along the disoccluded areas, and the boundary noise is removed by using a noise-free reference image. Experimental results show that natural synthesized images without background noise are produced.
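
To make the idea concrete, here is a hypothetical sketch of the boundary-noise handling step (the band width, the mask dilation, and the simple copy from a reference view are assumptions; the paper's actual procedure may differ):

```python
import cv2
import numpy as np

def remove_boundary_noise(synth, hole_mask, reference, band_width=5):
    """Sketch: widen the disocclusion mask to cover the band along object
    boundaries where foreground pixels may have leaked into the background,
    then replace those pixels with values from a noise-free reference view."""
    kernel = np.ones((band_width, band_width), np.uint8)
    noise_band = cv2.dilate(hole_mask.astype(np.uint8), kernel) > 0
    cleaned = synth.copy()
    cleaned[noise_band] = reference[noise_band]
    return cleaned
```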

A Preprocessing Algorithm for Layered Depth Image Coding (계층적 깊이영상 정보의 압축 부호화를 위한 전처리 방법)

  • 윤승욱;김성열;호요성
    • Journal of Broadcast Engineering
    • /
    • v.9 no.3
    • /
    • pp.207-213
    • /
    • 2004
  • The layered depth image (LDI) is an efficient approach to representing three-dimensional objects with complex geometry for image-based rendering (IBR). An LDI contains several attribute values together with multiple layers at each pixel location. In this paper, we propose an efficient preprocessing algorithm to compress the depth information of an LDI. Considering each depth value as a point in two-dimensional space, we compute the minimum distance between a straight line passing through the previous two values and the current depth value. Finally, this minimum distance replaces the current attribute value. The proposed algorithm reduces the variance of the depth information; therefore, it improves the transform and coding efficiency.
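
The distance computation described above can be written down directly; the following is a small illustrative sketch (whether the sign of the residual is kept is not specified in the abstract, so the absolute distance is used here):

```python
import numpy as np

def depth_residuals(depth):
    """Sketch: treat each depth value d[i] as the 2D point (i, d[i]) and
    replace it by its perpendicular distance to the straight line through
    the previous two points, reducing the variance of the sequence."""
    depth = np.asarray(depth, dtype=np.float64)
    out = depth.copy()
    for i in range(2, len(depth)):
        x1, y1 = i - 2, depth[i - 2]
        x2, y2 = i - 1, depth[i - 1]
        x0, y0 = i, depth[i]
        # Perpendicular distance from (x0, y0) to the line through
        # (x1, y1) and (x2, y2).
        num = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1))
        den = np.hypot(x2 - x1, y2 - y1)
        out[i] = num / den
    return out
```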

3D-HEVC Deblocking filter for Depth Video Coding (3D-HEVC 디블록킹 필터를 이용한 깊이 비디오 부호화)

  • Song, Yunseok;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.07a
    • /
    • pp.464-465
    • /
    • 2015
  • This paper proposes a deblocking filter that increases the efficiency of depth video coding in an HEVC (High Efficiency Video Coding)-based 3D video encoder. The deblocking filter is designed to reduce blocking artifacts, but because it was originally designed for the characteristics of color video, it is not used for depth video coding in the conventional scheme, together with SAO (Sample Adaptive Offset), which serves a similar purpose. Based on prior experimental statistics of the deblocking filter, the proposed method removes the normal filter, whose contribution is low. In addition, the impulse response is modified to reflect the characteristics of depth video. The modified deblocking filter is applied only to depth video coding, while the conventional deblocking filter is kept for color video coding. When implemented in the 3D-HTM (HEVC Test Model) 13.0 reference software, the proposed method improved depth video coding performance by 5.2% compared to the conventional method. Since color and depth video reference each other, the modified depth video coding could affect color video coding efficiency, but the experiments showed that color video coding performance was maintained. Therefore, the proposed method successfully increases the efficiency of depth video coding.
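
The actual 3D-HTM filter equations are involved; the following is only a heavily simplified illustration of the idea of keeping a strong, flatness-gated filter at block boundaries while dropping the normal (weak) filter for depth (thresholds, taps, and the boundary decision are assumptions):

```python
import numpy as np

def simplified_depth_deblocking(depth, block=8, beta=8):
    """Simplified sketch, not the 3D-HEVC filter: at each vertical block
    boundary, smooth the two boundary samples only when both sides are
    nearly flat; no normal (weak) filtering is applied."""
    out = depth.astype(np.float64).copy()
    h, w = out.shape
    for x in range(block, w, block):
        for y in range(h):
            p = out[y, x - 3:x]      # three samples left of the boundary
            q = out[y, x:x + 3]      # three samples right of the boundary
            if len(p) < 3 or len(q) < 3:
                continue
            # Flatness check on both sides of the boundary.
            if abs(p[0] - 2 * p[1] + p[2]) + abs(q[2] - 2 * q[1] + q[0]) < beta:
                avg = (p.sum() + q.sum()) / 6.0
                # Strong filtering: pull the boundary samples toward the mean.
                out[y, x - 1] = (out[y, x - 1] + avg) / 2.0
                out[y, x] = (out[y, x] + avg) / 2.0
    return out
```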

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.13-18
    • /
    • 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color cameras and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene depth in real time, the output depth images contain noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. Therefore, it is essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are weaknesses of stereo matching.
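
As an illustration of how a depth-camera measurement can constrain stereo matching (the SAD cost, window size, and search margin are assumptions; this is not the paper's exact algorithm):

```python
import numpy as np

def constrained_block_matching(left, right, init_disp, search_margin=3, block=5):
    """Sketch: block matching whose disparity search is limited to a small
    window around the disparity warped from the depth camera."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    h, w = left.shape
    half = block // 2
    disp = np.array(init_disp, dtype=np.float64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            d0 = int(round(init_disp[y, x]))
            best_cost, best_d = np.inf, d0
            # Search only near the disparity suggested by the depth camera.
            for d in range(max(0, d0 - search_margin), d0 + search_margin + 1):
                if x - d - half < 0:
                    continue
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()   # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```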

Depth location extraction and three-dimensional image recognition by use of holographic information of an object (홀로그램 정보를 이용한 깊이위치 추출과 3차원 영상인식)

  • 김태근
    • Korean Journal of Optics and Photonics
    • /
    • v.14 no.1
    • /
    • pp.51-57
    • /
    • 2003
  • The hologram of an object contains the information of the object's depth distribution as well as the depth location of the object. However these pieces of information are blended together as a form of fringe pattern. This makes it hard to extract the depth location of the object directly from the hologram. In this paper, I propose a numerical method which separates the depth location information from the single-sideband hologram by gaussian low-pass filtering. The depth location of the object is extracted by numerical analysis of the filtered hologram. The hologram at the object's depth location is recovered by the extracted depth location.