• Title/Summary/Keyword: Depth-Based-Image-Rendering


Camera Identification of DIBR-based Stereoscopic Image using Sensor Pattern Noise (센서패턴잡음을 이용한 DIBR 기반 입체영상의 카메라 판별)

  • Lee, Jun-Hee
    • Journal of the Korea Institute of Military Science and Technology / v.19 no.1 / pp.66-75 / 2016
  • Stereoscopic images generated by depth image-based rendering (DIBR) for surveillance robots and cameras are well suited to low-bandwidth networks. Such an image is critical data for a commander's decision-making, so its integrity has to be guaranteed. One way to detect manipulation is to check whether the stereoscopic image was taken by the original camera. Sensor pattern noise (SPN), widely used for camera identification, cannot be applied directly to a stereoscopic image because of the stereo warping in DIBR. To solve this problem, we locate the shifted objects in the stereoscopic image and relocate each object to its original position in the center image. The similarity between the SPNs extracted from the stereoscopic image and from the original camera is then measured only over the object area, so the source camera can be determined.
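A minimal sketch of the final matching step described above, assuming the shifted object has already been relocated to the center view: the sensor pattern noise is approximated as a denoising residual (a Gaussian filter stands in for the wavelet denoiser typically used with SPN), and similarity is measured as a normalized correlation over the object mask. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_spn(image, sigma=1.0):
    """Approximate sensor pattern noise as the residual of a denoising filter."""
    image = image.astype(np.float64)
    return image - gaussian_filter(image, sigma)

def spn_similarity(test_image, camera_reference_spn, object_mask):
    """Normalized correlation between the test image's SPN and the camera's
    reference SPN, restricted to the relocated object area."""
    noise = extract_spn(test_image)
    a = noise[object_mask] - noise[object_mask].mean()
    b = camera_reference_spn[object_mask] - camera_reference_spn[object_mask].mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Usage sketch: attribute the image to the camera if the correlation exceeds
# an empirically chosen threshold.
# rho = spn_similarity(relocated_view, reference_spn, mask)
# is_source_camera = rho > 0.01
```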

Occlusion-based Direct Volume Rendering for Computed Tomography Image

  • Jung, Younhyun
    • Journal of Multimedia Information System / v.5 no.1 / pp.35-42 / 2018
  • Direct volume rendering (DVR) is an important 3D visualization method for medical images, as it depicts the full volumetric data. However, because DVR renders the whole volume, regions of interest (ROIs) such as a tumor embedded within the volume may be occluded from view. Thus, conventional 2D cross-sectional views are still widely used, while the advantages of DVR are often neglected. In this study, we propose a new visualization algorithm in which we augment a 2D slice of interest (SOI) from an image volume with volumetric information derived from the DVR of the same volume. Our occlusion-based DVR augmentation for SOI (ODAS) uses the occlusion information derived from the voxels in front of the SOI to calculate a depth parameter that controls the amount of DVR visibility, providing 3D spatial cues without impairing the visibility of the SOI. We outline the capabilities of our ODAS and, through a variety of computed tomography (CT) medical image examples, compare it to a conventional fusion of the SOI and the clipped DVR.
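A simplified sketch of the occlusion idea, assuming front-to-back compositing along the view axis: the opacity accumulated by the voxels in front of the slice of interest drives a per-pixel visibility factor that attenuates the DVR overlay before blending with the SOI. The names and the exact blending rule below are assumptions, not the paper's ODAS formulation.

```python
import numpy as np

def occlusion_weighted_blend(volume_opacity, dvr_image, soi_image, soi_index, k=4.0):
    """Blend a DVR rendering with a 2D slice of interest (SOI).

    volume_opacity : (Z, H, W) per-voxel opacity along the viewing axis
    dvr_image      : (H, W) DVR rendering of the full volume (grayscale)
    soi_image      : (H, W) the slice of interest at depth index `soi_index`
    """
    # Transmittance of everything in front of the SOI (1 = unoccluded).
    front = volume_opacity[:soi_index]
    transmittance = np.prod(1.0 - front, axis=0)

    # Heavily occluded pixels suppress the DVR overlay so the SOI stays readable.
    dvr_visibility = transmittance ** k

    return dvr_visibility * dvr_image + (1.0 - dvr_visibility) * soi_image
```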

2D/3D image Conversion Method using Simplification of Level and Reduction of Noise for Optical Flow and Information of Edge (Optical flow의 레벨 간소화 및 노이즈 제거와 에지 정보를 이용한 2D/3D 변환 기법)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.2 / pp.827-833 / 2012
  • In this paper, we propose an improved optical flow algorithm that reduces both computational complexity and noise. The algorithm shortens computation time by applying a level simplification technique and removes noise by using the eigenvectors of objects. Optical flow is one of the more accurate methods for generating depth information from two image frames, using vectors that track the motion of pixels. However, its pixel-based calculation makes it very slow and prone to noise. The level simplification technique is applied to reduce the computation time, and noise is suppressed by applying optical flow only to regions that have eigenvectors, while the edge image is used to generate the depth information of the background area. Three-dimensional images were created from two-dimensional images with the proposed method, which first generates the depth information and then converts the 2D image into a 3D image using that depth information and the DIBR (Depth Image Based Rendering) technique. The error rate was measured using the SSIM (Structural SIMilarity) index.
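The final conversion stage, DIBR from a per-pixel depth map, can be sketched as a horizontal pixel shift proportional to disparity. This is a generic illustration of DIBR, not the authors' implementation; hole filling after the warp is omitted, and `max_disparity` is an assumed parameter.

```python
import numpy as np

def dibr_warp(image, depth, max_disparity=16):
    """Synthesize one stereo view by shifting pixels horizontally.

    image : (H, W) or (H, W, 3) source frame
    depth : (H, W) depth map normalized to [0, 1], 1 = nearest
    """
    h, w = depth.shape
    disparity = np.round(depth * max_disparity).astype(int)
    out = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Shift left to form the right-eye view; overlaps keep the last write.
        x_new = np.clip(cols - disparity[y], 0, w - 1)
        out[y, x_new] = image[y, cols]
    return out  # disoccluded pixels remain zero and need hole filling
```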

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.12 no.12 / pp.1-13 / 2012
  • 3D video is regarded as next-generation content for numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a mature 3D video era. In 2D-to-3D conversion methods, after the depth image of each scene in the 2D video is estimated, stereoscopic video is synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, each cue is first checked for reliability in the current scene. Based on the results of these reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable cues to generate the final depth information. Simulation results show that each depth cue is utilized appropriately according to the scene type and that the final depth is generated from cues that effectively represent the current scene.
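A hedged sketch of the fusion idea: each depth cue carries a per-scene reliability score, the subset of cues that pass their tests corresponds to the scene type, and only those cues are combined (here by a reliability-weighted average; the paper's actual per-type fusion rules are not reproduced).

```python
import numpy as np

def fuse_depth_cues(depth_cues, reliabilities, threshold=0.5):
    """Reliability-weighted fusion of per-cue depth maps (illustrative only).

    depth_cues    : list of (H, W) depth maps, e.g. from motion, geometric,
                    and focus cues
    reliabilities : list of scalar reliability scores in [0, 1], one per cue,
                    obtained from per-scene reliability tests
    """
    reliable = [(d, r) for d, r in zip(depth_cues, reliabilities) if r >= threshold]
    if not reliable:
        # No cue passed its reliability test: fall back to a flat depth map.
        return np.full_like(depth_cues[0], 0.5)

    weights = np.array([r for _, r in reliable], dtype=np.float64)
    stack = np.stack([d for d, _ in reliable]).astype(np.float64)
    return np.tensordot(weights, stack, axes=1) / weights.sum()

# The "scene type" in the paper corresponds to which subset of cues passed;
# this sketch handles every subset with one weighted average.
```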

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model, and the initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversion effectively and reduces human eye fatigue.
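A minimal sketch of the temporal update step described above: the next frame's depth is updated from the previous frame's depth, with the per-pixel update strength driven by the frame difference image so that static regions keep their depth and flicker is suppressed. The exponential weighting and the parameter `tau` are assumptions, not the paper's exact rule.

```python
import numpy as np

def update_depth(prev_depth, cur_depth_estimate, prev_frame, cur_frame, tau=10.0):
    """Temporally smooth depth-map update driven by the frame difference image.

    prev_depth, cur_depth_estimate : (H, W) depth maps in [0, 1]
    prev_frame, cur_frame          : (H, W) grayscale frames
    """
    diff = np.abs(cur_frame.astype(np.float64) - prev_frame.astype(np.float64))
    # alpha -> 0 where the scene is static (keep previous depth, no flicker),
    # alpha -> 1 where the content changed (trust the new estimate).
    alpha = 1.0 - np.exp(-diff / tau)
    return (1.0 - alpha) * prev_depth + alpha * cur_depth_estimate
```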

Panorama Field Rendering based on Depth Estimation (깊이 추정에 기반한 파노라마 필드 렌더링)

  • Jung, Myoungsook;Han, JungHyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.15-22 / 2000
  • One of the main research trends in image-based modeling and rendering is how to implement the plenoptic function. For this purpose, this paper proposes a novel approach based on a set of randomly placed panoramas. The proposed approach first adopts a simple computer vision technique to approximate omni-directional depth information of the surrounding scene, and then corrects and interpolates the panorama images to generate an output image at the desired vantage point. Implementation results show that the proposed approach achieves smooth navigation at an interactive rate.
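A very rough sketch of the interpolation step only, under the assumption that nearby panoramas are blended with inverse-distance weights toward the novel viewpoint; the depth-based per-pixel correction that the paper applies before blending is omitted, and all names are illustrative.

```python
import numpy as np

def interpolate_panoramas(panoramas, positions, viewpoint, eps=1e-6):
    """Blend captured panoramas with inverse-distance weights (illustrative).

    panoramas : list of (H, W, 3) panorama images of identical size
    positions : list of (x, y) capture positions of those panoramas
    viewpoint : (x, y) desired vantage point
    """
    viewpoint = np.asarray(viewpoint, dtype=np.float64)
    dists = np.array([np.linalg.norm(np.asarray(p, dtype=np.float64) - viewpoint)
                      for p in positions])
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()
    stack = np.stack([p.astype(np.float64) for p in panoramas])
    return np.tensordot(weights, stack, axes=1)
```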


Digital Watermarking Algorithm for Multiview Images Generated by Three-Dimensional Warping

  • Park, Scott;Kim, Bora;Kim, Dong-Wook;Seo, Youngho
    • Journal of information and communication convergence engineering / v.13 no.1 / pp.62-68 / 2015
  • In this paper, we propose a watermarking method for protecting the ownership of three-dimensional (3D) content generated from depth and texture images. After selecting the target areas that preserve the watermark under depth-image-based rendering, the reference viewpoint image is shifted left and right according to the depth map until the maximum viewpoint change is reached, and the overlapping region is taken as the marking space. This region is divided into four subparts and scanned; after applying the discrete cosine transform, the watermarks are inserted. To extract the watermark, the viewpoint is first changed by referring to the viewpoint image and the corresponding depth image, and then returned to the original viewpoint. The watermark embedding and extraction algorithms are based on quantization. The watermarked image is tested against JPEG compression, blurring, sharpening, and salt-and-pepper noise attacks.
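A hedged sketch of quantization-based embedding in DCT coefficients, using simple quantization index modulation on one mid-frequency coefficient per 8x8 block. The block scanning order, the warped marking region, and the coefficient choice used in the paper are not reproduced; `step` and `coeff` are assumed parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(region, bits, step=24.0, coeff=(3, 2)):
    """Quantization-index-modulation embedding in 8x8 DCT blocks (sketch).

    region : (H, W) float image patch selected as the marking space
    bits   : iterable of 0/1 watermark bits, one per 8x8 block (row-major)
    """
    out = region.astype(np.float64).copy()
    h, w = out.shape
    bits = iter(bits)
    for by in range(0, h - 7, 8):
        for bx in range(0, w - 7, 8):
            try:
                bit = next(bits)
            except StopIteration:
                return out
            block = dctn(out[by:by + 8, bx:bx + 8], norm='ortho')
            q = np.floor(block[coeff] / step)
            if int(q) % 2 != bit:
                q += 1                      # move to a bin with matching parity
            block[coeff] = (q + 0.5) * step  # snap to the bin center
            out[by:by + 8, bx:bx + 8] = idctn(block, norm='ortho')
    return out

def extract_bit(block8x8, step=24.0, coeff=(3, 2)):
    """Recover one bit from an 8x8 block embedded as above."""
    c = dctn(block8x8.astype(np.float64), norm='ortho')[coeff]
    return int(np.floor(c / step)) % 2
```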

GPU-based Image-space Collision Detection among Closed Objects (GPU를 이용한 이미지 공간 충돌 검사 기법)

  • Jang, Han-Young;Jeong, Taek-Sang;Han, Jung-Hyun
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.45-52 / 2006
  • This paper presents an image-space algorithm for real-time collision detection that runs entirely on the GPU. For a single object, or for multiple objects with no collision, front and back faces appear alternately along the view direction; this alternation is violated when objects collide. Based on this observation, the algorithm uses depth peeling to render only the minimal surfaces of objects, rather than the whole surface, to find collisions. The depth-peeling method exploits modern GPU functionality such as framebuffer objects, vertex buffer objects, and occlusion queries. Combining these features, multi-pass rendering and context switches can be done with low overhead, so the proposed approach requires fewer rendering passes and less rendering overhead than previous image-space collision detection methods. The algorithm can handle deformable and complex objects, and its precision is governed by the resolution of the render-target texture. The experimental results show the feasibility of GPU-based collision detection and its performance gain in real-time applications such as 3D games.
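The core observation can be illustrated CPU-side: given the facing flags of the surfaces crossed by one view ray in depth order (what successive depth-peeling passes deliver layer by layer), a broken front/back alternation signals a collision. This only illustrates the rule; the paper's implementation runs on the GPU with framebuffer objects and occlusion queries.

```python
def ray_has_collision(facing_in_depth_order):
    """Check the front/back alternation rule along a single view ray.

    facing_in_depth_order : list of 'front' / 'back' flags of the surfaces
                            intersected by the ray, sorted by depth.
    Disjoint closed objects produce the pattern front, back, front, back, ...;
    any deviation means objects interpenetrate along this ray.
    """
    expected = 'front'
    for facing in facing_in_depth_order:
        if facing != expected:
            return True
        expected = 'back' if expected == 'front' else 'front'
    # An unmatched 'front' at the end of the sequence is also invalid.
    return expected == 'back'

# Example: two spheres overlapping along one ray
# ray_has_collision(['front', 'front', 'back', 'back'])  -> True
# ray_has_collision(['front', 'back', 'front', 'back'])  -> False
```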


Generation of an eye-contacted view using color and depth cameras (컬러와 깊이 카메라를 이용한 시점 일치 영상 생성 기법)

  • Hyun, Jee-Ho;Han, Jae-Young;Won, Jong-Pil;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.8 / pp.1642-1652 / 2012
  • Generally, the camera is not located at the center of the display in a tele-presence system, which causes incorrect eye contact between speakers and reduces the sense of realism during conversation. To solve this eye-contact problem, we propose an intermediate-view reconstruction algorithm that uses both a color camera and a depth camera and applies the depth image based rendering (DIBR) algorithm. The proposed algorithm includes an efficient hole-filling method that uses the arithmetic mean of neighboring pixels and an efficient boundary-noise removal method that expands the edge region of the depth image. Experiments show that the generated eye-contacted images have good quality.
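A minimal sketch of the hole-filling rule described above: disoccluded pixels left by the DIBR warp are replaced by the arithmetic mean of their valid neighbors, iterating until no holes remain. The window size and iteration scheme are assumptions, and the boundary-noise removal by edge-region expansion is not shown.

```python
import numpy as np

def fill_holes_mean(image, hole_mask, radius=1, max_iters=100):
    """Fill DIBR holes with the mean of valid neighboring pixels.

    image     : (H, W) warped view (float); hole pixels may hold any value
    hole_mask : (H, W) boolean array, True where the warp left no data
    """
    img = image.astype(np.float64).copy()
    holes = hole_mask.copy()
    h, w = img.shape
    for _ in range(max_iters):
        if not holes.any():
            break
        for y, x in zip(*np.nonzero(holes)):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            valid = ~holes[y0:y1, x0:x1]
            if valid.any():
                # Arithmetic mean of the already-valid pixels in the window.
                img[y, x] = img[y0:y1, x0:x1][valid].mean()
                holes[y, x] = False
    return img
```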

Simultaneous Method for Depth Image Based Rendering Technique (깊이 영상 기반 렌더링을 위한 동시 처리 방법)

  • Jung, Kwang-Hee;Park, Young-Kyung;Kim, Joong-Kyu;Lee, Gwang-Soon;Lee, Hyun;Hur, Nam-Ho;Kim, Jin-Woong
    • Proceedings of the IEEK Conference / 2008.06a / pp.859-860 / 2008
  • In this paper, we present a simultaneous processing method for depth image based rendering. The simultaneous method reduces the high computational complexity and the memory consumption required for DIBR. Experimental results show that the proposed method is suitable for generating auto-stereoscopic images.
