• Title/Abstract/Keyword: a depth

Search results: 22,081 items (processing time: 0.04 sec)

AdaMM-DepthNet: Unsupervised Adaptive Depth Estimation Guided by Min and Max Depth Priors for Monocular Images

  • 김문철
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송∙미디어공학회 2020년도 추계학술대회
    • /
    • pp.252-255
    • /
    • 2020
  • Unsupervised deep learning methods have shown impressive results for the challenging monocular depth estimation task, a field of study that has gained attention in recent years. A common approach for this task is to train a deep convolutional neural network (DCNN) via an image synthesis sub-task, where additional views are utilized during training to minimize a photometric reconstruction error. Previous unsupervised depth estimation networks are trained within a fixed depth estimation range, irrespective of the possible depth range for a given image, leading to suboptimal estimates. To overcome this limitation, we first propose an unsupervised adaptive depth estimation method guided by minimum and maximum (min-max) depth priors for a given input image. The incorporation of min-max depth priors can drastically reduce the depth estimation complexity and produce depth estimates with higher accuracy. Moreover, we propose a novel network architecture for adaptive depth estimation, called AdaMM-DepthNet, which adopts min-max depth estimation in its front side. Extensive experimental results demonstrate that the adaptive depth estimation can significantly boost the accuracy with fewer parameters compared to conventional approaches with a fixed minimum and maximum depth range.

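The min-max prior idea above can be sketched as follows. This is an illustrative toy, not the paper's code: a network output normalized to [0, 1] is mapped into a per-image [d_min, d_max] interval instead of one fixed global range, so the same output resolution is spent on a much smaller depth span.

```python
def to_depth(normalized, d_min, d_max):
    """Map a network output in [0, 1] to the prior interval [d_min, d_max]."""
    return d_min + (d_max - d_min) * normalized

# With a fixed global range of [0.1, 100] m, the output must cover ~100 m;
# with per-image priors of, say, [2, 10] m, the same normalized output only
# spans 8 m, giving finer depth granularity for the same network capacity.
fixed = to_depth(0.5, 0.1, 100.0)
adaptive = to_depth(0.5, 2.0, 10.0)
```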

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 6, No. 3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real-time, it has several problems to overcome. Therefore, after we capture low-resolution depth images by ToF depth sensors, we perform a post-processing to solve the problems. Then, the depth information of the depth sensor is warped to color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we have obtained more accurate and stable multi-view disparity maps in reduced time.
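The warping step described above relies on the standard relation between depth and disparity. A hedged sketch under a pinhole stereo model with known focal length and baseline (values illustrative, not from the paper):

```python
def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Disparity (pixels) = focal length (pixels) * baseline (m) / depth (m)."""
    return focal_px * baseline_m / depth_m

# A point 2 m away, seen by cameras 10 cm apart with a 1000 px focal length,
# yields a 50 px initial disparity for the stereo matcher to refine.
d0 = depth_to_disparity(2.0, 1000.0, 0.1)
```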

Development of Wear Model concerning the Depth Behaviour

  • 김형규;이영호
    • KSTLE International Journal
    • /
    • Vol. 6, No. 1
    • /
    • pp.1-7
    • /
    • 2005
  • A wear model for predicting the behaviour of wear depth is considered in this paper. It is deduced from energy- and volume-based wear models such as the Archard equation and the workrate model. A new parameter, the equivalent depth ($D_e$ = wear volume / worn area), is considered for the depth-prediction wear model. A concept of dissipated shear energy density is accommodated in the suggested models. It is found that $D_e$ can distinguish the worn area shape. The cube of $D_e$ ($D_e^3$) gives a better linear regression with the volume than that of the maximum depth $D_{max}$ ($D_{max}^3$) does. Both $D_{max}$ and $D_e$ are used for the presently suggested depth-based wear model. As a result, a wear depth profile can be simulated by the model using $D_{max}$, and wear resistance in terms of the overall depth can be identified by the wear coefficient of the model using $D_e$.
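The equivalent depth parameter defined above is a one-line computation; the numbers below are illustrative, not from the paper:

```python
def equivalent_depth(wear_volume, worn_area):
    """D_e = wear volume / worn area."""
    return wear_volume / worn_area

# A flat (uniform) scar of area 5 mm^2 worn to 2 mm has volume 10 mm^3, so
# D_e recovers the uniform depth; a wedge-shaped scar of the same area and
# maximum depth has half the volume, so D_e = D_max / 2. This is the sense
# in which D_e can distinguish the worn area shape.
flat = equivalent_depth(10.0, 5.0)   # uniform scar
wedge = equivalent_depth(5.0, 5.0)   # wedge scar with the same D_max
```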

깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법 (Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images)

  • 엄기문;안충현;이수인;김강연;이관행
    • 방송공학회논문지
    • /
    • Vol. 9, No. 3
    • /
    • pp.185-195
    • /
    • 2004
  • In this paper, we propose a multi-depth map fusion technique for accurate 3D scene reconstruction. The proposed technique fuses multiple depth maps obtained from stereo matching, a passive 3D acquisition method, and from a depth camera, an active 3D acquisition method. Conventional stereo matching, which estimates disparity between two stereo images, produces many disparity errors in occluded and weakly textured regions. A depth map from a depth camera provides relatively accurate depth information, but it contains considerable noise and its measurable depth range is limited. To overcome the drawbacks of the two methods and let them complement each other, we propose a depth map fusion technique that appropriately selects the disparity or depth value from the multiple depth maps obtained by the two methods. From a 3-view image set, two disparity maps are obtained for the left and right images with respect to the center view; after preprocessing to align positions and depth values with the depth map obtained from the depth camera mounted at the center-view camera, an appropriate depth value is selected at each pixel position based on texture information, the depth map distribution, and so on. Computer simulation results of the proposed technique show that the accuracy of the depth map is improved in some background regions.
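The per-pixel selection rule can be sketched as below. This is a simplified toy of the idea, not the paper's criterion: trust the stereo value where texture is strong, and fall back on the (pre-registered) depth-camera value in textureless regions. The threshold and 1-D layout are illustrative.

```python
def fuse_depth(stereo, depth_cam, texture, tex_thresh=0.2):
    """Select per-pixel between stereo and depth-camera values (1-D lists)."""
    return [s if t >= tex_thresh else d
            for s, d, t in zip(stereo, depth_cam, texture)]

# Textured pixels keep the stereo estimate; the textureless middle pixel
# takes the depth-camera value instead.
fused = fuse_depth([30, 31, 90], [29, 40, 41], [0.9, 0.05, 0.6])
```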

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 5, No. 11
    • /
    • pp.2175-2190
    • /
    • 2011
  • This paper presents the effects of depth map quantization for multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied the pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that affords acceptable image quality to generate DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which the multiview images captured directly from the scene are compared to the multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed that depth map quantization from 16-bit down to 7-bit (more specifically, 96 levels) had no significant effect on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
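The uniform requantization studied above can be sketched as follows: a 16-bit depth value is reduced to a given bit depth and expanded back, which makes the reconstruction step size visible. Integer arithmetic and scaling choices are illustrative, not the paper's exact procedure.

```python
def requantize(d16, bits):
    """Quantize a 16-bit depth value to `bits` bits, then expand to 16-bit."""
    levels = 1 << bits
    q = d16 * levels // 65536          # quantization index in [0, levels - 1]
    return q * 65535 // (levels - 1)   # map the index back to [0, 65535]

# 7-bit (128-level) quantization keeps the extremes exact but introduces a
# step of roughly 65535 / 127, i.e. about 516, between representable depths.
lo, hi = requantize(0, 7), requantize(65535, 7)
```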

Scalable Coding of Depth Images with Synthesis-Guided Edge Detection

  • Zhao, Lijun;Wang, Anhong;Zeng, Bing;Jin, Jian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 10
    • /
    • pp.4108-4125
    • /
    • 2015
  • This paper presents a scalable coding method for depth images by considering the quality of synthesized images in virtual views. First, we design a new edge detection algorithm that is based on calculating the depth difference between two neighboring pixels within the depth map. By choosing different thresholds, this algorithm generates a scalable bit stream that puts larger depth differences in front, followed by smaller depth differences. A scalable scheme is also designed for coding depth pixels through a layered sampling structure. At the receiver side, the full-resolution depth image is reconstructed from the received bits by solving a partial differential equation (PDE). Experimental results show that the proposed method improves the rate-distortion performance of synthesized images at virtual views and achieves better visual quality.
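The threshold-based edge rule described above can be sketched minimally. This toy is illustrative only: an edge is declared between neighboring pixels whose depth difference exceeds a threshold, and lowering the threshold admits smaller differences, which is what makes the stream scalable (larger differences come first).

```python
def depth_edges(row, threshold):
    """Edge flags between each pair of horizontally neighboring depths."""
    return [abs(b - a) > threshold for a, b in zip(row, row[1:])]

coarse = depth_edges([10, 10, 50, 52], threshold=20)  # only the large jump
fine = depth_edges([10, 10, 50, 52], threshold=1)     # smaller jumps too
```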

비지도학습 기반의 뎁스 추정을 위한 지식 증류 기법 (Knowledge Distillation for Unsupervised Depth Estimation)

  • 송지민;이상준
    • 대한임베디드공학회논문지
    • /
    • Vol. 17, No. 4
    • /
    • pp.209-215
    • /
    • 2022
  • This paper proposes a novel approach for training an unsupervised depth estimation algorithm. The objective of unsupervised depth estimation is to estimate pixel-wise distances from the camera without external supervision. While most previous works focus on model architectures, loss functions, and masking methods for considering dynamic objects, this paper focuses on the training framework to effectively use depth cues. The main loss function of unsupervised depth estimation algorithms is known as the photometric error. In this paper, we claim that a direct depth cue is more effective than the photometric error. To obtain the direct depth cue, we adopt the technique of knowledge distillation, a teacher-student learning framework. We train a teacher network based on a previous unsupervised method, and its depth predictions are utilized as pseudo labels. The pseudo labels are employed to train a student network. In experiments, our proposed algorithm shows performance comparable to the state-of-the-art algorithm, and we demonstrate that our teacher-student framework is effective in the problem of unsupervised depth estimation.
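The teacher-student objective can be sketched as a hypothetical combined loss. The L1 pseudo-label term and the weighting below are assumptions for illustration, not the paper's exact formulation:

```python
def distill_loss(student, teacher, photometric_err, alpha=0.5):
    """Blend pseudo-label supervision (L1 to teacher depths) with photometry."""
    pseudo = sum(abs(s - t) for s, t in zip(student, teacher)) / len(student)
    return alpha * pseudo + (1.0 - alpha) * photometric_err

# A student that matches the teacher exactly is driven only by the
# remaining photometric term.
loss = distill_loss([1.0, 2.0], [1.0, 2.0], photometric_err=0.4)
```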

상호 구조에 기반한 초점으로부터의 깊이 측정 방법 개선 (Enhancing Depth Measurements in Depth From Focus based on Mutual Structures)

  • 무하마드 타릭 마흐무드;최영규
    • 반도체디스플레이기술학회지
    • /
    • Vol. 21, No. 3
    • /
    • pp.17-21
    • /
    • 2022
  • A variety of techniques have been proposed in the literature for depth improvement in the depth-from-focus method. Unfortunately, these techniques over-smooth the depth maps over regions of depth discontinuities. In this paper, we propose a robust technique for improving the depth map by employing a nonconvex smoothness function that preserves the depth edges. In addition, the proposed technique exploits the mutual structures between the depth map and a guidance map. This guidance map is designed by taking the mean of image intensities in the image sequence. The depth map is updated iteratively until the nonconvex objective function converges. Experiments performed on real, complex image sequences revealed the effectiveness of the proposed technique.
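The guidance map construction described above is simply the per-pixel mean over the focus sequence. A minimal sketch, with images flattened to 1-D intensity lists for brevity:

```python
def guidance_map(stack):
    """Per-pixel mean over a sequence of equally sized intensity lists."""
    n = len(stack)
    return [sum(pix) / n for pix in zip(*stack)]

# Two frames of a three-pixel focus sequence averaged per pixel:
g = guidance_map([[0, 2, 4], [2, 4, 6]])
```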

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 7
    • /
    • pp.2434-2448
    • /
    • 2014
  • This research proposed the domain division depth map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). This technique used per-pixel depth quantization according to the percentage of depth bits assigned to domains of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization on DIBR multiview intermediate image generation. The experiment evaluated three quantization methods with computer-generated 3D scenes of various scene complexities and backgrounds, while varying the depth resolution. The results showed that the proposed domain division depth quantization method outperformed the linear method on 7-bit or lower depth maps, especially in scenes with large objects.
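The domain division idea can be sketched as a nonuniform bin assignment: the near depth domain (where the region of interest usually lies) receives more quantization levels than the far domain. The split point and level counts below are illustrative, not the paper's actual bit budget:

```python
def domain_index(depth, split=0.3, near_levels=96, far_levels=32):
    """Bin index for a depth in [0, 1); near bins are finer than far bins."""
    if depth < split:
        return int(depth / split * near_levels)
    return near_levels + int((depth - split) / (1.0 - split) * far_levels)

# The near 30% of the range gets 96 bins while the far 70% gets only 32,
# so a near-domain depth is quantized far more finely than a far one.
near = domain_index(0.12)
far = domain_index(0.9)
```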

Touch Pen Using Depth Information

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • 한국멀티미디어학회논문지
    • /
    • Vol. 18, No. 11
    • /
    • pp.1313-1318
    • /
    • 2015
  • Current touch pens require special equipment to detect a touch, and their price increases in proportion to the screen size. In this paper, we propose a method for detecting a touch and implementing a pen using depth information. The proposed method obtains a background depth image using a depth camera and extracts an object by comparing a captured depth image with the background depth image. Also, we determine a touch if the depth value of the object is nearly the same as the background, and then provide the pen event. Using this method, we can implement a cheaper and more convenient touch pen.
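The touch rule described above can be sketched as a background-subtraction test on depth: a pixel whose depth sits just in front of the stored background surface (within a small tolerance) is taken as a touch point. The thresholds and 1-D images are illustrative assumptions, not the paper's calibration.

```python
def detect_touch(bg_depth, cur_depth, noise=2, touch_tol=8):
    """Indices of pixels where an object essentially reaches the surface."""
    touches = []
    for i, (bg, cur) in enumerate(zip(bg_depth, cur_depth)):
        in_front = bg - cur                # distance in front of the background
        if noise < in_front <= touch_tol:  # present, and close enough to touch
            touches.append(i)
    return touches

# One finger hovering far from the screen (40 mm away) and one essentially
# at the surface (5 mm): only the latter registers as a touch.
touch_points = detect_touch([100, 100, 100], [100, 60, 95])
```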