• Title/Summary/Keyword: depth map rendering


Wider Depth Dynamic Range Using Occupancy Map Correction for Immersive Video Coding (몰입형 비디오 부호화를 위한 점유맵 보정을 사용한 깊이의 동적 범위 확장)

  • Lim, Sung-Gyun;Hwang, Hyeon-Jong;Oh, Kwan-Jung;Jeong, Jun Young;Lee, Gwangsoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1213-1215 / 2022
  • The MPEG Immersive Video (MIV) standard for immersive video coding efficiently compresses views captured at various positions in a limited 3D space, providing users with six degrees of freedom (6DoF) immersion at an arbitrary position and orientation. Because the MIV reference software, TMIV (Test Model for Immersive Video), removes regions that are redundant across views to reduce the number of pixels to transmit, per-pixel occupancy information must also be sent for rendering at the decoder. TMIV embeds the occupancy map in the depth atlas before compression, and allocates part of the dynamic range used to represent depth values as a guard band to prevent occupancy information from being lost to coding errors. Shrinking this guard band to use a wider dynamic range for depth values can improve rendering quality. This paper therefore analyzes the occupancy errors of the current TMIV, proposes a technique to correct them, and evaluates the coding performance obtained by extending the depth dynamic range. Compared with the existing TMIV, the proposed method achieves an average BD-rate gain of 1.3%.

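The guard-band idea in this abstract can be sketched in a few lines. This is a toy illustration under assumed parameters (10-bit depth samples, a guard value of 64), not TMIV's actual packing code: unoccupied pixels are written as 0, occupied pixels are quantized into the range above the guard band, and the decoder treats anything well below the guard band as a hole.

```python
def pack_depth(norm_depth, occupied, bits=10, guard=64):
    """Pack occupancy into a depth sample (illustrative, not the TMIV code).

    Occupied pixels map to [guard, 2**bits - 1]; unoccupied pixels are 0.
    A smaller guard band leaves a wider dynamic range for depth values."""
    max_val = (1 << bits) - 1
    if not occupied:
        return 0
    # Quantize normalized depth in [0, 1] into the range above the guard band.
    return guard + round(norm_depth * (max_val - guard))

def unpack_depth(sample, bits=10, guard=64):
    """Recover (occupied, norm_depth). Values below guard/2 count as holes,
    tolerating small coding errors that push samples under the guard band."""
    max_val = (1 << bits) - 1
    if sample < guard // 2:
        return False, 0.0
    d = max(sample, guard)  # clamp coding error back into the valid range
    return True, (d - guard) / (max_val - guard)
```

Halving `guard` here widens the usable depth range, which is exactly the trade-off against occupancy robustness that the paper's correction technique targets.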

A Depth-based Disocclusion Filling Method for Virtual Viewpoint Image Synthesis (가상 시점 영상 합성을 위한 깊이 기반 가려짐 영역 메움법)

  • Ahn, Il-Koo;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.48-60 / 2011
  • Nowadays, the 3D community is actively researching 3D imaging and free-viewpoint video (FVV). Free-viewpoint rendering in multi-view video, which virtually moves through the scene to create different viewpoints, has become a popular topic in 3D research and can lead to various applications. However, multi-view systems face restrictions in cost-effectiveness and occupy large bandwidth in video transmission. An alternative that addresses this problem is to generate virtual views from a single texture image and a corresponding depth image. A critical issue in generating virtual views is that regions occluded by foreground (FG) objects in the original view may become visible in the synthesized views. Filling these disocclusions (holes) in a visually plausible manner determines the quality of the synthesis results. In this paper, a new approach for handling disocclusions in synthesized views using a depth-based inpainting algorithm is presented. Patch-based non-parametric texture synthesis, which shows excellent performance, has two critical elements: determining where to fill first and determining which patch to copy. In this work, a noise-robust filling priority using the structure tensor of the Hessian matrix is proposed. Moreover, a patch-matching algorithm that excludes foreground regions using the depth map and takes the epipolar line into account is proposed. The superiority of the proposed method over existing methods is demonstrated by comparing experimental results.
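The foreground-excluding patch search described above can be sketched with a 1-D toy. This is only an illustration of the idea (a single row stands in for the epipolar line, and SSD for the patch distance); the indices, threshold, and data layout are assumptions, not the paper's implementation:

```python
def best_patch(image, depth, target, size, fg_thresh, row):
    """Toy 1-D patch search for disocclusion filling (illustrative only).

    Scans candidate patches on one row (a stand-in for the epipolar line),
    skips any patch containing foreground pixels (depth > fg_thresh), and
    returns the start index minimizing SSD against the known target pixels
    (unknown target pixels are None and are ignored in the cost)."""
    best, best_cost = None, float("inf")
    for s in range(len(image[row]) - size + 1):
        patch = image[row][s:s + size]
        if any(d > fg_thresh for d in depth[row][s:s + size]):
            continue  # exclude patches that touch the foreground
        cost = sum((p - t) ** 2 for p, t in zip(patch, target) if t is not None)
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```

The key point mirrored from the paper is the depth test: a candidate is rejected outright if any of its pixels belongs to the foreground, so holes behind objects are only ever filled with background texture.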

Robust and Blind Watermarking for DIBR Using a Depth Variation Map (깊이변화지도를 이용한 DIBR 공격의 강인성 블라인드 워터마킹)

  • Lee, Yong-Seok;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.21 no.6 / pp.845-860 / 2016
  • This paper proposes a digital watermarking scheme to protect the ownership of free-view 2D or 3D images, where the viewer watches images rendered at an arbitrary viewpoint from the received texture image and its depth image. In this setting a viewpoint-change attack essentially occurs even when it is not malicious, and malicious attacks intended to remove the embedded watermark information must also be considered. In this paper, we generate a depth variation map (DVM) to find the locations least sensitive to viewpoint changes. For each LH subband of a 3-level 2D DWT of the texture image, the watermarking locations are found by consulting the DVM. A watermark bit is embedded into a pixel using a linear quantizer whose quantization step is determined by the energy of the subband. To extract the watermark information, all possible candidates are first extracted from the attacked image by considering their correlation with the original watermark information. For each bit position, the final extracted bit is determined by a statistical treatment of all the candidates corresponding to that position. The proposed method is tested on various images under various attacks and compared with previous methods to show its excellent performance.
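The linear-quantizer embedding mentioned above is essentially quantization-index modulation. A minimal sketch, assuming a fixed step (the paper derives the step from the subband energy, which is omitted here): even quantization bins carry a 0 bit and odd bins a 1.

```python
def embed_bit(coeff, bit, step):
    """Quantization-index-style embedding of one watermark bit into a DWT
    coefficient (a sketch; the paper scales `step` by subband energy)."""
    q = round(coeff / step)
    if q % 2 != bit:
        # Move to the nearest bin whose parity matches the bit.
        q += 1 if coeff >= q * step else -1
    return q * step

def extract_bit(coeff, step):
    """Read the bit back from the parity of the nearest quantization bin."""
    return round(coeff / step) % 2
```

Because extraction only needs the parity of the nearest bin, the bit survives any perturbation smaller than half the quantization step, which is what makes the scheme blind and robust to mild attacks.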

Panoramic Navigation using Orthogonal Cross Cylinder Mapping and Image-Segmentation Based Environment Modeling (직각 교차 실린더 매핑과 영상 분할 기반 환경 모델링을 이용한 파노라마 네비게이션)

  • 류승택;조청운;윤경현
    • Journal of KIISE:Computer Systems and Theory / v.30 no.3_4 / pp.138-148 / 2003
  • Orthogonal Cross Cylinder mapping and segmentation-based modeling methods have been implemented to construct the image-based navigation system in this paper. The Orthogonal Cross Cylinder (OCC) is the object formed by the intersection volume of two cylinders that cross orthogonally. OCC mapping eliminates the singularities that occur in conventional environment maps and gives each texel a nearly uniform share of the environment's area. A full-view image from a fixed point of view can be obtained with OCC mapping, although it becomes difficult to express the scene once the point of view changes. For segmentation-based modeling, the OCC map is segmented according to the objects that form the environment, and depth values are set from the characteristics of the classified objects. This method can easily be applied to an environment map and makes environment modeling easier by extracting depth values through image segmentation. An environment navigation system with a full view can be developed with these methods.
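A loose sketch of how a view direction could be mapped onto such a cross-cylinder surface, under my own simplifying assumptions (two unit cylinders centered at the origin; this is not the paper's exact parameterization): a ray from the center exits the intersection volume through whichever cylinder wall it reaches first, which selects the cylinder and yields an (angle, height) texel coordinate.

```python
import math

def occ_map(x, y, z):
    """Map a view direction to (cylinder, angle, height) on an
    orthogonal-cross-cylinder surface (an illustrative sketch only).

    One unit cylinder is around the y axis, the other around the x axis;
    the smaller ray parameter picks the wall the ray exits through."""
    ty = 1.0 / math.hypot(x, z) if (x or z) else math.inf  # hit y-axis cylinder
    tx = 1.0 / math.hypot(y, z) if (y or z) else math.inf  # hit x-axis cylinder
    if ty <= tx:
        return "y-cyl", math.atan2(x, z), y * ty  # height along the y axis
    return "x-cyl", math.atan2(y, z), x * tx      # height along the x axis
```

Because every direction lands on one of the two walls with comparable texel density, there is no pole singularity of the kind sphere maps suffer from, which is the property the abstract highlights.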

A Shadow Mapping Technique Separating Static and Dynamic Objects in Games using Multiple Render Targets (다중 렌더 타겟을 사용하여 정적 및 동적 오브젝트를 분리한 게임용 그림자 매핑 기법)

  • Lee, Dongryul;Kim, Youngsik
    • Journal of Korea Game Society / v.15 no.5 / pp.99-108 / 2015
  • To convey object locations and improve realism in 3D games, shadow mapping is widely used: the depth values of vertices are computed from the light's point of view. Since the depth values in the shadow map are calculated in world coordinates, the depth values of static objects do not need to be updated. In this paper, (1) to improve rendering speed, multiple render targets are used to separate the depth values of static objects, which are stored only once, from those of dynamic objects, which are stored every frame. And (2) to improve shadow quality in quarter-view 3D games, the light is positioned close to the dynamic objects, following the camera each frame. The effectiveness of the proposed method is verified by experiments with different static and dynamic object configurations in a 3D game.
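The two-render-target idea in point (1) can be sketched in CPU-side pseudocode. This is an assumption-laden illustration, not the paper's GPU code: `light_depth_of` is a hypothetical helper returning an object's per-texel light-space depths, and the render targets are flat lists.

```python
import math

def render_shadow_depth(static_depth, dynamic_objects, light_depth_of):
    """Sketch of the multiple-render-target split: the static scene's
    light-space depth buffer is rasterized once and cached; per frame only
    dynamic objects are rendered into a second buffer, and the final shadow
    map is the per-texel minimum (the surface nearest the light).

    `light_depth_of(obj)` is a hypothetical helper yielding the object's
    light-space depth per texel, with math.inf where it covers nothing."""
    dynamic_depth = [math.inf] * len(static_depth)  # cleared every frame
    for obj in dynamic_objects:
        for i, d in enumerate(light_depth_of(obj)):
            dynamic_depth[i] = min(dynamic_depth[i], d)
    # Merge: the static target is reused untouched, so static geometry
    # costs nothing after the first frame.
    return [min(s, d) for s, d in zip(static_depth, dynamic_depth)]
```

The speedup comes from the merge step being the only per-frame work done for static geometry.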

Model-Based Three-dimensional Multiview Object Implementation by OpenGL (OpenGL을 이용한 모델 기반 3차원 다시점 객체 구현)

  • Oh, Won-Sik;Kim, Dong-Uk;Kim, Hwa-Sung;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.13 no.3 / pp.299-309 / 2008
  • In this paper, we propose an algorithm for generating objects from model-based 3-dimensional multi-viewpoint images using OpenGL rendering. In the first step, we preprocess the depth map image to obtain 3-dimensional coordinates, which are sampled as OpenGL vertex information with the depth stored as the z value. Next, the Delaunay triangulation algorithm is used to construct polygons for texture mapping from the vertex information. Finally, by mapping a texture image onto the constructed polygons, we generate a viewpoint-adaptive object through 3-dimensional coordinate calculations in OpenGL.
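The depth-map-to-mesh step can be sketched as follows. Note one simplification: the paper triangulates with Delaunay, while this sketch splits each grid cell into two fixed triangles, which is enough to show the vertex and index layout a texture-mapped mesh needs.

```python
def depth_to_mesh(depth, spacing=1.0):
    """Build vertices and triangle indices from a depth map for texture
    mapping. Vertices are (x, y, z) with z taken from the depth map; each
    grid cell is split into two triangles (a simpler stand-in for the
    Delaunay triangulation used in the paper)."""
    rows, cols = len(depth), len(depth[0])
    verts = [(c * spacing, r * spacing, depth[r][c])
             for r in range(rows) for c in range(cols)]
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append((i, i + 1, i + cols))             # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return verts, tris
```

The resulting `verts` and `tris` arrays are exactly what would be handed to OpenGL as a vertex buffer and index buffer, with texture coordinates derived from the (x, y) grid positions.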

Model-based 3D Multiview Object Implementation by OpenGL (OpenGL을 이용한 모델기반 3D 다시점 영상 객체 구현)

  • Oh, Won-Sik;Kim, Dong-Wook;Kim, Hwa-Sung;Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.59-62 / 2006
  • This paper focuses on the structure for implementing model-based 3D multi-view image objects using OpenGL rendering and on the algorithms applied to each module. To generate a multi-view object from a single texture image and its depth map, the depth information first goes through a preprocessing step. The preprocessed depth is sampled into vertices at regular intervals in OpenGL; each sampled vertex is a point in 3D space whose z coordinate is its depth value. Based on these vertices, the Delaunay triangulation algorithm is applied to construct polygons for texture mapping. Mapping the texture image onto the constructed polygons yields an object whose viewpoint can be adjusted freely through OpenGL coordinate calculations. To obtain a multi-view object with a wider viewing range from only a single image and depth map, new vertices were generated to extend the polygons, securing a wider range of viewpoints than before. Visual quality was further improved by smoothing the depth information along the boundary regions of the rendered model.


View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.36-42 / 2012
  • Recently, smart applications such as the smart phone and smart TV have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video uses stereoscopic or multi-view images to provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints in multi-view video is limited, 3D display devices must generate arbitrary viewpoint images from the available adjacent views. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting and a hole filling method that uses multi-view images. We then propose an algorithm to remove the boundary noise generated by mismatches between object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.

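The warping-then-filling pipeline described above can be reduced to a 1-D sketch. Everything here is an assumed simplification (integer disparity proportional to depth on a single row; real 3D warping uses camera parameters): pixels shift by their disparity, nearer pixels win conflicts, and remaining holes are patched from the other reference view.

```python
def synthesize_row(src, src_depth, shift_scale):
    """1-D sketch of depth-based warping: each pixel moves by a disparity
    proportional to its depth, leaving holes (None) where nothing lands."""
    out = [None] * len(src)
    out_depth = [-1] * len(src)
    for x, (c, d) in enumerate(zip(src, src_depth)):
        nx = x + round(shift_scale * d)
        if 0 <= nx < len(out) and d > out_depth[nx]:  # nearer pixel wins
            out[nx], out_depth[nx] = c, d
    return out

def fill_from_other_view(row, other_row):
    """Fill remaining holes with the co-located pixel of the other reference
    view, mirroring the paper's multi-view hole filling."""
    return [o if p is None else p for p, o in zip(row, other_row)]
```

The paper's boundary-noise removal works at the same point in the pipeline: where color and depth edges disagree, the suspect texture is likewise replaced from the other reference image.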

Consider the directional hole filling method for virtual view point synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in 3D image applications. A virtual view image is created from a known view and its associated depth map to synthesize a viewpoint that was not captured by a camera. However, disocclusion areas occur because the virtual viewpoint is created through depth-image-based 3D warping. To remove such disocclusion regions, many hole filling methods have been proposed, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting. These methods, however, introduce problems of their own: various annoying artifacts appear when filling holes in textured regions. To solve this, this paper proposes a multi-directional extrapolation method that improves hole filling performance. The method is effective when filling holes against complex textured backgrounds: taking directionality into account, each hole pixel is estimated from neighboring texture pixel values. Experiments confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
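The direction-aware estimation can be sketched as follows. The specifics (two samples per direction, linear extrapolation, smoothness as the selection criterion) are my own illustrative assumptions rather than the paper's exact formulation:

```python
def fill_hole_directional(img, y, x, offsets):
    """Sketch of direction-aware extrapolation: for each candidate direction,
    take two consecutive known pixels behind the hole, linearly extrapolate,
    and keep the direction whose samples agree best (smoothest texture).
    Hole pixels are None."""
    best, best_diff = None, float("inf")
    h, w = len(img), len(img[0])
    for dy, dx in offsets:
        ay, ax = y + dy, x + dx            # nearest neighbor in direction
        by, bx = y + 2 * dy, x + 2 * dx    # next pixel further out
        if not (0 <= by < h and 0 <= bx < w):
            continue                       # direction runs off the image
        a, b = img[ay][ax], img[by][bx]
        if a is None or b is None:
            continue                       # direction is still inside the hole
        if abs(a - b) < best_diff:         # prefer the smoothest direction
            best, best_diff = 2 * a - b, abs(a - b)
    return best
```

Compared with purely horizontal extrapolation, letting several directions compete avoids streak artifacts when the background texture is not aligned with the image rows.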

Screen Content Coding Analysis to Improve Coding Efficiency for Immersive Video (몰입형 비디오 압축을 위한 스크린 콘텐츠 코딩 성능 분석)

  • Lee, Soonbin;Jeong, Jong-Beom;Kim, Inae;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.6 / pp.911-921 / 2020
  • Recently, MPEG-I (Immersive) has been exploring compression performance through standardization projects for immersive video. The MPEG Immersive Video (MIV) standard is intended to provide limited 6DoF based on depth image-based rendering (DIBR). MIV encodes basic views and processes the residual information of the additional views into collections of patches. The resulting atlases have distinct characteristics depending on the kind of views they contain, which must be considered for compression efficiency. In this paper, a performance comparison of screen content coding tools such as intra block copy (IBC) is conducted, exploiting the repeated patterns across the various views and patches. It is demonstrated that the proposed method improves coding performance by around a 15.74% BD-rate reduction for MIV.
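Intra block copy, the tool the comparison centers on, predicts a block by pointing a block vector at an identical region already reconstructed in the same picture. A toy sketch of that search, under assumed simplifications (exact-match search restricted to rows fully above the current block; the real VVC/HEVC-SCC search is far more constrained and rate-aware):

```python
def ibc_search(frame, by, bx, size):
    """Toy intra block copy (IBC) search: scan the already-reconstructed
    area above the current block for an identical block and return its
    block vector (dy, dx), or None if no match exists."""
    target = [row[bx:bx + size] for row in frame[by:by + size]]
    for sy in range(by - size + 1):          # candidate rows fully above
        for sx in range(len(frame[0]) - size + 1):
            cand = [row[sx:sx + size] for row in frame[sy:sy + size]]
            if cand == target:
                return sy - by, sx - bx      # block vector
    return None                              # fall back to regular intra
```

Patch atlases are full of such exact repetitions across views, which is why IBC-style tools pay off on MIV content.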