• Title/Summary/Keyword: depth-image


Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE: Computer Systems and Theory / v.33 no.11 / pp.853-858 / 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphics objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow because it receives little assistance from graphics hardware, and that surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface rendering in response to varying illumination is performed in the vertex shader, while adaptive point splatting is performed in the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
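The core of DIBR with point splatting can be illustrated on the CPU: back-project each reference pixel with its depth, transform it into the target camera, and splat it with a depth-dependent radius under a z-buffer. This is a minimal numpy sketch of the general technique, not the paper's GPU shader implementation; the splat-radius rule is illustrative.

```python
import numpy as np

def render_dibr(depth, color, K, R, t, out_h, out_w):
    """CPU sketch of depth image-based rendering with adaptive point splatting.

    depth: (H, W) depth per reference pixel; color: (H, W, 3);
    K: 3x3 intrinsics; R, t: pose of the target view relative to the reference.
    """
    H, W = depth.shape
    out = np.zeros((out_h, out_w, 3))
    zbuf = np.full((out_h, out_w), np.inf)          # z-buffer for visibility
    vs, us = np.mgrid[0:H, 0:W]
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)])
    pts = np.linalg.inv(K) @ pix * depth.ravel()     # back-project to 3D
    cam = R @ pts + t[:, None]                       # move into target camera
    proj = K @ cam
    z = proj[2]
    u = proj[0] / np.where(z != 0, z, 1.0)
    v = proj[1] / np.where(z != 0, z, 1.0)
    # adaptive splat radius: nearer points cover more target pixels (heuristic)
    radius = np.clip(np.round(2.0 / np.maximum(z, 1e-6)).astype(int), 0, 3)
    cols = color.reshape(-1, 3)
    for i in range(z.size):
        if z[i] <= 0:
            continue
        r = radius[i]
        u0, v0 = int(round(u[i])), int(round(v[i]))
        for dv in range(-r, r + 1):
            for du in range(-r, r + 1):
                uu, vv = u0 + du, v0 + dv
                if 0 <= uu < out_w and 0 <= vv < out_h and z[i] < zbuf[vv, uu]:
                    zbuf[vv, uu] = z[i]
                    out[vv, uu] = cols[i]
    return out
```

On the GPU, as in the paper, the per-point transform would run in the vertex shader and the splat footprint would be resolved in the fragment shader.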

A Study on the Generation and Processing of Depth Map for Multi-resolution Image Using Belief Propagation Algorithm (신뢰확산 알고리즘을 이용한 다해상도 영상에서 깊이영상의 생성과 처리에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.6 / pp.201-208 / 2015
  • A 3D image must carry a depth image for depth information to enable realistic 3D media broadcasting. The belief propagation algorithm is commonly used to solve the underlying probability model; it operates by passing messages between nodes corresponding to each pixel. A high-resolution image can be represented precisely, but requires considerable computation for 3D representation. We propose a fast stereo matching algorithm that applies belief propagation on a multi-resolution decomposition based on wavelets or lifting. The method achieves efficient computation times even over the many iterations needed for an accurate disparity map.
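The multi-resolution idea can be sketched without the full belief propagation machinery: solve the matching at a downsampled level, upsample the coarse disparity as a guide, and keep fine-level estimates only where they agree with it. This stands in for the paper's wavelet/lifting pyramid with BP refinement; it is a simplification, not the authors' exact method.

```python
import numpy as np

def sad_disparity(left, right, max_d):
    """Winner-take-all disparity by per-pixel absolute difference (SAD)."""
    H, W = left.shape
    cost = np.full((H, W, max_d + 1), np.inf)
    for d in range(max_d + 1):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :W - d])
    return np.argmin(cost, axis=2)

def downsample2(a):
    """2x downsample by averaging 2x2 blocks (even dimensions assumed)."""
    return 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])

def coarse_to_fine_disparity(left, right, max_d, levels=2):
    """Coarse-to-fine matching: coarse solution constrains the fine one."""
    if levels == 0 or min(left.shape) < 8:
        return sad_disparity(left, right, max_d)
    coarse = coarse_to_fine_disparity(downsample2(left), downsample2(right),
                                      max_d // 2, levels - 1)
    # upsample the coarse disparity and double it to the fine scale
    guess = np.repeat(np.repeat(coarse * 2, 2, axis=0), 2, axis=1)
    fine = sad_disparity(left, right, max_d)
    # keep fine estimates only where they stay near the coarse guess
    return np.where(np.abs(fine - guess) <= 1, fine, guess)
```

The coarse pass prunes most of the disparity search, which is where the speedup over a single full-resolution pass comes from.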

Restoration of underwater images using depth and transmission map estimation, with attenuation priors

  • Jarina, Raihan A.;Abas, P.G. Emeroylariffion;De Silva, Liyanage C.
    • Ocean Systems Engineering / v.11 no.4 / pp.331-351 / 2021
  • Underwater images differ greatly from images taken on land, owing to the higher disturbance ratio caused by the water medium between the camera and the target object. These distortions and noises result in unclear details and reduced quality of the output image. An underwater image restoration method is proposed in this paper, which uses blurriness information, background-light neutralization information, and red-light intensity to estimate depth. The transmission map is then estimated from the derived depth map by considering separate attenuation coefficients for direct and backscattered signals. The estimated transmission map and estimated background light are then used to recover the scene radiance. Qualitative and quantitative analyses comparing the proposed method against other state-of-the-art restoration methods show that it yields good-quality restored underwater images. Evaluation with several qualitative metrics further shows that the method is highly capable of restoring underwater images captured under different conditions, demonstrating its applicability to underwater image restoration work.
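The final recovery step follows the standard underwater image-formation model: with per-channel attenuation, transmission is t_c = exp(-β_c·d), and scene radiance is J_c = (I_c - B_c)/max(t_c, t0) + B_c. A minimal sketch, assuming illustrative β values rather than the paper's fitted coefficients:

```python
import numpy as np

def restore_underwater(img, depth, background_light, beta=(0.8, 0.4, 0.2)):
    """Recover scene radiance from an RGB image and a depth map.

    img: (H, W, 3) in [0, 1]; depth: (H, W); background_light: per-channel B_c.
    beta: per-channel attenuation coefficients (illustrative values; red
    attenuates fastest underwater, hence the larger first entry).
    """
    t0 = 0.1                                  # lower bound avoids amplifying noise
    out = np.empty_like(img)
    for c in range(3):
        t = np.exp(-beta[c] * depth)          # transmission from depth
        out[..., c] = (img[..., c] - background_light[c]) / np.maximum(t, t0) \
                      + background_light[c]
    return np.clip(out, 0.0, 1.0)
```

The paper's contribution lies upstream of this step, in how the depth map and the separate direct/backscatter attenuation coefficients are estimated.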

Virtual Viewpoint Image Synthesis Algorithm using Multi-view Geometry (다시점 카메라 모델의 기하학적 특성을 이용한 가상시점 영상 생성 기법)

  • Kim, Tae-June;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.12C / pp.1154-1166 / 2009
  • In this paper, we propose algorithms for generating high-quality virtual intermediate views on or off the baseline. In the proposed algorithm, depth information as well as a 3D warping technique is used to generate the virtual views. The real 3D coordinates of the scene are calculated from the depth information and the geometric characteristics of the cameras, and the calculated 3D coordinates are projected onto the 2D plane of an arbitrary camera position, yielding a 2D virtual view image. Experiments show that virtual views generated on the baseline by the proposed algorithm gain at least 0.5 dB in PSNR, and that occluded regions are covered more efficiently for virtual views generated off the baseline.
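The 3D warping step described above is standard: back-project each pixel with its depth into 3D, transform it into the virtual camera's frame, and re-project. A minimal forward-mapping sketch (shared intrinsics K are assumed for both cameras for brevity):

```python
import numpy as np

def warp_pixels(depth, K, R, t):
    """Forward 3D warping: back-project every reference pixel with its depth,
    move it to a virtual camera at pose (R, t), and re-project to 2D.
    Returns per-source-pixel target (u, v) coordinates and warped depth."""
    H, W = depth.shape
    vs, us = np.mgrid[0:H, 0:W]
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)])   # homogeneous pixels
    X = np.linalg.inv(K) @ pix * depth.ravel()   # 3D points in the reference camera
    Xv = R @ X + t[:, None]                      # same points in the virtual camera
    p = K @ Xv
    return (p[0] / p[2]).reshape(H, W), (p[1] / p[2]).reshape(H, W), \
           p[2].reshape(H, W)
```

Resampling the warped coordinates onto the target grid, and filling the occlusions that open up off the baseline, are the parts the paper's algorithm addresses beyond this basic mapping.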

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect also generates holes and noise around object boundaries in the captured images, and boundary flickering occurs. We therefore propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around boundaries are filled using the joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images; boundary flickering is then reduced by setting the flickering pixels to the maximum pixel value of the previous depth image, and virtual views are generated by applying a 3D warping technique. Holes outside the occlusion regions are filled with the center pixel value of the most reliable block, after the final block reliability is computed using a block-based gradient search with block reliability. Experimental results show that the proposed algorithm generates the virtual-view image in real time.
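The hole-filling step can be sketched directly: each invalid depth pixel receives a weighted average of valid neighbours, with weights combining spatial distance and the guiding color image's intensity difference, so that fills do not bleed across color edges. This covers only the joint bilateral part of the pipeline, not the temporal flicker handling; parameter values are illustrative.

```python
import numpy as np

def fill_depth_holes(depth, gray, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Fill zero-valued depth pixels by joint bilateral averaging.

    depth: (H, W) with 0 marking holes; gray: (H, W) guiding intensity image.
    """
    H, W = depth.shape
    out = depth.astype(float).copy()
    for v in range(H):
        for u in range(W):
            if depth[v, u] != 0:
                continue                        # not a hole
            wsum = dsum = 0.0
            for dv in range(-radius, radius + 1):
                for du in range(-radius, radius + 1):
                    vv, uu = v + dv, u + du
                    if 0 <= vv < H and 0 <= uu < W and depth[vv, uu] != 0:
                        # spatial closeness x intensity similarity
                        w = np.exp(-(dv * dv + du * du) / (2 * sigma_s ** 2)
                                   - (gray[v, u] - gray[vv, uu]) ** 2
                                   / (2 * sigma_r ** 2))
                        wsum += w
                        dsum += w * depth[vv, uu]
            if wsum > 0:
                out[v, u] = dsum / wsum
    return out
```

A real-time version would vectorize or offload this per-pixel loop, as the paper's implementation must.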

Estimating Directly Damage on External Surface of Container from Parameters of Capsize-Gaussian-Function

  • Son TRAN Ngoc Hoang;KIM Hwan-Seong
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2005.10a / pp.297-302 / 2005
  • In this paper, a method for estimating damage on the external surface of a container using the Capsize-Gaussian-Function (CGF) is presented. The size of the damage can be estimated directly from two parameters of the CGF, its depth and its flexure, together with the direction of the damage. The performance of the method is illustrated on an image of a damaged container taken at Hanjin Busan Port. After preprocessing with standard image-processing techniques, chiefly Canny edge detection, which is widely used in computer vision to locate sharp intensity changes and object boundaries, the edge image is correlated with the CGF over its three parameters (direction, depth, flexure). The result is an image that conveys the damage information, with the CGF parameters serving as direct estimates of the damage.

Single-Image Dehazing based on Scene Brightness for Perspective Preservation

  • Young-Su Chung;Nam-Ho Kim
    • Journal of Information and Communication Convergence Engineering / v.22 no.1 / pp.70-79 / 2024
  • Bad weather conditions such as haze lead to a significant loss of visibility in images, which can affect the functioning and reliability of image processing systems. Accordingly, various single-image dehazing (SID) methods have recently been proposed. Existing SID methods introduce effective visibility-improvement algorithms, but they do not reflect the image's perspective and thus tend to distort the sky region and nearby objects. This study proposes a new SID method that preserves the sense of space by defining the correlation between image brightness and haze. The proposed method defines haze intensity by calculating the airlight brightness deviation, and sets the weight factor of the depth map by classifying images, according to the defined haze intensity, into images with a strong sense of space, images with high intensity, and general images. Consequently, it emphasizes the contrast of nearby objects where haze is present and naturally smooths the sky region, preserving the image's perspective.
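The overall shape of such a method can be sketched with the standard haze model J = (I - A)/t + A, where a brightness-based haze proxy weights the depth map before it is turned into transmission. The weighting rule below is illustrative only; the paper's actual classification thresholds and weight factors are not given in the abstract.

```python
import numpy as np

def dehaze(img, depth, airlight, beta=1.0):
    """Depth-weighted single-image dehazing sketch.

    img: (H, W, 3) in [0, 1]; depth: (H, W) relative depth map;
    airlight: estimated atmospheric light (scalar or per-channel).
    """
    # crude haze-intensity proxy: the closer the image is to the airlight,
    # the hazier we assume it is (stands in for the paper's classification)
    haze_strength = 1.0 - np.mean(np.abs(img - airlight))
    t = np.exp(-beta * haze_strength * depth)    # weighted depth -> transmission
    t = np.maximum(t, 0.1)                       # avoid division blow-up
    return np.clip((img - airlight) / t[..., None] + airlight, 0.0, 1.0)
```

Because transmission decays with depth, nearby pixels get stronger contrast restoration while distant regions such as the sky are smoothed more gently, which is the perspective-preserving behaviour the paper targets.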

Depth estimation by using a double conic projection (이중원뿔 투영을 이용한 거리의 추정)

  • 김완수;조형석;김성권
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.1411-1414 / 1997
  • Distance information is essential for completely executing assembly tasks such as grasping and insertion. In this paper, we propose a method for estimating the distance from the sensor to an object using the omni-directional image sensing system for assembly (OISSA), and show its features and feasibility through a computer simulation. The method, based on a forward-motion stereo technique, makes it simple to search for corresponding points and can immediately obtain three-dimensional 2π shape information.
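The forward-motion stereo principle behind the method can be written down in one line: from the pinhole relation r = f·R/Z, a feature at radial image position r1 that moves to r2 after the camera advances a known distance dz satisfies Z = dz·r2/(r2 - r1). This is the generic forward-motion relation, not the OISSA double-conic geometry itself:

```python
import numpy as np

def depth_from_forward_motion(r1, r2, dz):
    """Depth at the first camera position from a forward motion of dz.

    r1, r2: radial image positions of the same feature before/after the move.
    Derivation: r1 = f*R/Z and r2 = f*R/(Z - dz)  =>  Z = dz * r2 / (r2 - r1).
    """
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    return dz * r2 / (r2 - r1)
```

The double-conic mirror changes how image coordinates map to the ray angle, but the same triangulation over the forward displacement applies.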

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul;Kang, Won-Young;Jeong, Yeong-Hu;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.920-927 / 2013
  • The chroma-key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike ordinary chroma keying, image composition for stereoscopic 3D display requires a natural composition method in 3D space. This paper composites images in 3D space using a depth-keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was converted into a point cloud in 3D space after being separated from its background according to depth. The composited image of the 3D virtual background and the object was then rendered and played back as stereoscopic 3D using a virtual camera.
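The depth-keying idea itself is simple to state in code: foreground membership is decided by a depth range instead of a key color, which removes chroma keying's color and space constraints. A minimal sketch of that keying step (the paper's full pipeline additionally builds a mesh/point cloud and renders stereo pairs):

```python
import numpy as np

def depth_key_composite(color, depth, background, z_near, z_far):
    """Composite foreground over a virtual background by depth range.

    color: (H, W, 3) aligned color image; depth: (H, W) depth map;
    background: (H, W, 3) virtual background; [z_near, z_far] keeps foreground.
    """
    mask = (depth >= z_near) & (depth <= z_far)      # depth replaces the key color
    return np.where(mask[..., None], color, background)
```

With a calibrated DSLR/Kinect pair the color and depth images must first be registered, since the mask is only valid where the two are pixel-aligned.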

Improvement of 3D Stereoscopic Perception Using Depth Map Transformation (깊이맵 변환을 이용한 3D 입체감 개선 방법)

  • Jang, Seong-Eun;Jung, Da-Un;Seo, Joo-Ha;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.16 no.6 / pp.916-926 / 2011
  • It is well known that high-resolution 3D movie content frequently fails to deliver the same 3D perception as low-resolution 3D images. To solve this problem, we propose a novel method that produces a new stereoscopic image through a depth map transformation based on the spatial complexity of the image. After the depth map histogram is analyzed, the depth map is decomposed into multiple depth planes, which are transformed according to the spatial complexity. The transformed depth planes are then composited into a new depth map. Experimental results demonstrate that the lower the spatial complexity, the higher the perceived video quality and depth perception. A visual fatigue test also showed that the resulting stereoscopic images cause less visual fatigue.
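The plane-wise transformation step can be sketched compactly: split the depth range into planes at chosen histogram boundaries, then scale each plane by its own gain. Here the gains are supplied by the caller; in the paper they are derived from the image's spatial complexity, which this sketch does not model.

```python
import numpy as np

def transform_depth(depth, edges, gains):
    """Scale each depth plane by its own gain.

    depth: depth values (any shape); edges: sorted plane boundaries
    (len(edges) + 1 planes); gains: one gain per plane.
    """
    idx = np.searchsorted(edges, depth)   # which plane each pixel falls in
    return depth * np.asarray(gains)[idx]
```

Choosing the plane boundaries from the depth histogram, as the paper does, concentrates the remapping where depth values actually cluster.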