• Title/Summary/Keyword: Virtual viewpoint image

A Multi 3D Objects Augmentation System Using Rubik's Cube (루빅스 큐브를 활용한 다 종류 3차원 객체 증강 시스템)

  • Lee, Sang Jun;Kim, Soo Bin;Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1224-1235 / 2017
  • Recently, augmented reality technology has received much attention in many fields. This paper presents an augmented reality system that uses a Rubik's cube and can augment different 3D objects depending on the pattern of the cube. The system first detects the cube in an image using partitional clustering and a strongly connected graph. It then detects the top face of the cube and identifies its pattern to determine which object should be augmented. The object corresponding to the pattern is finally augmented according to the camera viewpoint. Experimental results show that the proposed system successfully augments various virtual objects in real time.

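A toy sketch of the last step described above, i.e. mapping a recognized top-face colour pattern to the 3D object to be augmented. The cube detection (partitional clustering, strongly connected graph) and the AR rendering are not reproduced; the colour palette, the pattern table, and the sampled sticker colours are hypothetical placeholders, not the authors' data.

```python
# Illustrative only: map the detected top-face colour pattern of a Rubik's cube
# to a 3D object name. All constants below are assumed for the example.
import numpy as np

# Canonical sticker colours (BGR), assumed for illustration.
PALETTE = {
    "W": (255, 255, 255), "Y": (0, 255, 255), "R": (0, 0, 255),
    "O": (0, 128, 255), "G": (0, 255, 0), "B": (255, 0, 0),
}

# Hypothetical lookup: top-face pattern -> object to augment.
PATTERN_TO_OBJECT = {
    "WWWWWWWWW": "teapot",
    "RGRGRGRGR": "dragon",
}

def classify_sticker(bgr):
    """Assign a sampled sticker colour to the nearest palette entry."""
    dists = {k: np.linalg.norm(np.array(bgr, float) - np.array(v, float))
             for k, v in PALETTE.items()}
    return min(dists, key=dists.get)

def pattern_from_top_face(sticker_colours):
    """sticker_colours: nine (B, G, R) samples, row-major over the top face."""
    return "".join(classify_sticker(c) for c in sticker_colours)

if __name__ == "__main__":
    # Nine roughly-white samples -> the 'teapot' pattern in this toy table.
    samples = [(250, 252, 248)] * 9
    pattern = pattern_from_top_face(samples)
    print(pattern, "->", PATTERN_TO_OBJECT.get(pattern, "no object"))
```
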
A Depth-based Disocclusion Filling Method for Virtual Viewpoint Image Synthesis (가상 시점 영상 합성을 위한 깊이 기반 가려짐 영역 메움법)

  • Ahn, Il-Koo;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.48-60 / 2011
  • Nowadays, the 3D community is actively researching 3D imaging and free-viewpoint video (FVV). Free-viewpoint rendering in multi-view video, which lets viewers virtually move through a scene to create different viewpoints, has become a popular topic in 3D research and can lead to various applications. However, multi-view transmission is limited by cost-effectiveness and the large bandwidth it occupies. An alternative that addresses this problem is to generate virtual views from a single texture image and a corresponding depth image. A critical issue in generating virtual views is that regions occluded by foreground (FG) objects in the original views may become visible in the synthesized views. Filling these disocclusions (holes) in a visually plausible manner determines the quality of the synthesis results. In this paper, a new approach for handling disocclusions in synthesized views using a depth-based inpainting algorithm is presented. Patch-based non-parametric texture synthesis, which shows excellent performance, has two critical elements: determining where to fill first and determining which patch to copy. In this work, a noise-robust filling priority using the structure tensor of the Hessian matrix is proposed. Moreover, a patch-matching algorithm that uses the depth map to exclude foreground regions and takes the epipolar line into account is proposed. The superiority of the proposed method over existing methods is demonstrated by comparing experimental results.

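A loose illustrative sketch of a structure-tensor-based filling priority of the kind the abstract describes, in the spirit of exemplar-based inpainting. It uses the standard gradient structure tensor rather than the authors' Hessian-based formulation; the smoothing scale, coherence measure, and boundary test are illustrative assumptions, and the depth-aware patch matching is omitted.

```python
# Sketch: rank hole-boundary pixels by local structure coherence so that
# strongly structured (edge-like) regions are filled first.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_coherence(gray, sigma=1.5):
    """Per-pixel coherence from the smoothed gradient structure tensor."""
    Ix = sobel(gray, axis=1)
    Iy = sobel(gray, axis=0)
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    # Eigenvalues of the 2x2 tensor at each pixel.
    tmp = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1 = 0.5 * (Jxx + Jyy + tmp)
    lam2 = 0.5 * (Jxx + Jyy - tmp)
    return (lam1 - lam2) / (lam1 + lam2 + 1e-8)  # 0: isotropic, 1: strong edge

def filling_priority(gray, hole_mask):
    """Higher values mark hole-boundary pixels that should be filled first."""
    coherence = structure_tensor_coherence(gray)
    known = ~hole_mask
    # Boundary of the hole: hole pixels with at least one known neighbour.
    neighbour_known = (np.roll(known, 1, 0) | np.roll(known, -1, 0) |
                       np.roll(known, 1, 1) | np.roll(known, -1, 1))
    boundary = hole_mask & neighbour_known
    return np.where(boundary, coherence, -np.inf)

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    hole = np.zeros_like(img, dtype=bool)
    hole[20:40, 20:40] = True
    p = filling_priority(img, hole)
    y, x = np.unravel_index(np.argmax(p), p.shape)
    print("fill first at", (y, x))
```
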
Implementation of Selective Mapping Billboard for Production of Image-based 3D Virtual Reality (실사기반의 3차원 가상현실 제작을 위한 선택적 맵핑 방식의 빌보드 구현)

  • Ahn, Eun-Young;Kim, Jae-Won
    • Journal of Korea Multimedia Society / v.13 no.4 / pp.601-608 / 2010
  • This study proposes a new method that overcomes the disadvantages of panorama VR, which is oriented toward spatial information, and object VR, which is oriented toward the object itself, in order to produce 3D virtual reality (VR) content efficiently with an image-based approach. 3D VR content offers satisfactory quality to users, but 3D modeling is complex, laborious, and costly. This paper therefore aims to reduce the effort of building 3D VR by substituting an advanced billboard (called a Smart Billboard) for 3D modeling. The Smart Billboard has a mechanism for selecting the mapping image that should be visible from each user viewpoint and texture-mapping it onto the billboard. The approach is validated with a practical implementation of a virtual museum whose exhibits are presented with Smart Billboards.

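A minimal sketch of the viewpoint-dependent image selection that such a billboard needs: pick the pre-captured photograph whose capture azimuth is closest to the current viewing direction. The capture angles, file names, and 2D geometry are assumptions for illustration; the actual texture mapping onto the billboard is omitted.

```python
# Choose which pre-captured image to map onto the billboard for the current view.
import math

# Hypothetical object photos captured every 30 degrees around the exhibit.
CAPTURED = {angle: f"exhibit_{angle:03d}.jpg" for angle in range(0, 360, 30)}

def viewing_azimuth(camera_xy, object_xy):
    """Azimuth (degrees, 0-360) from the object towards the camera."""
    dx = camera_xy[0] - object_xy[0]
    dy = camera_xy[1] - object_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def select_billboard_image(camera_xy, object_xy):
    az = viewing_azimuth(camera_xy, object_xy)
    # Capture angle with the smallest circular distance to the viewing azimuth.
    best = min(CAPTURED, key=lambda a: min(abs(az - a), 360.0 - abs(az - a)))
    return CAPTURED[best]

if __name__ == "__main__":
    print(select_billboard_image(camera_xy=(3.0, 3.2), object_xy=(0.0, 0.0)))
```
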
Multiple TIP Images Blending for Wide Virtual Environment (넓은 가상환경 구축을 위한 다수의 TIP (Tour into the Picture) 영상 합성)

  • Roh, Chang-Hyun;Lee, Wan-Bok;Ryu, Dae-Hyun;Kang, Jung-Jin
    • Journal of the Institute of Electronics Engineers of Korea TE / v.42 no.1 / pp.61-68 / 2005
  • Image-based rendering is an approach for generating realistic images in real time without modeling explicit 3D geometry. In particular, owing to its simplicity, TIP (Tour Into the Picture) is preferred for constructing a 3D background scene. Because existing TIP methods lack geometric information, an accurate scene cannot be expected when the viewpoint is far from the origin of the TIP. In this paper, we propose a method for constructing a wide-area virtual environment by blending multiple TIP images. First, we construct multiple TIP models of the virtual environment. We then interpolate foreground and background objects separately to generate smooth navigation images. The proposed method can be applied to various industrial applications such as computer games and 3D car navigation.

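A toy sketch, under assumed inputs, of blending renderings from two TIP models as the viewpoint moves between them. The inverse-distance weighting used here is an illustrative choice, not necessarily the paper's interpolation scheme, and foreground and background objects are not treated separately.

```python
# Cross-dissolve two TIP renderings with weights based on viewpoint distance.
import numpy as np

def blend_tip_views(img_a, img_b, viewpoint, origin_a, origin_b):
    """Weighted blend of two renderings; the nearer TIP origin dominates."""
    da = np.linalg.norm(np.asarray(viewpoint, float) - np.asarray(origin_a, float))
    db = np.linalg.norm(np.asarray(viewpoint, float) - np.asarray(origin_b, float))
    w_a = db / (da + db + 1e-8)          # closer to A gives A a larger weight
    return w_a * img_a.astype(np.float32) + (1.0 - w_a) * img_b.astype(np.float32)

if __name__ == "__main__":
    a = np.zeros((4, 4, 3), np.uint8)
    b = np.full((4, 4, 3), 255, np.uint8)
    out = blend_tip_views(a, b, viewpoint=(2.5, 0, 0),
                          origin_a=(0, 0, 0), origin_b=(10, 0, 0))
    print(out[0, 0])   # viewpoint is closer to TIP A, so the result stays dark
```
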
Real-Time Free Viewpoint TV System Using CUDA (CUDA 를 이용한 실시간 Free Viewpoint TV System 구현)

  • Yang, Yun Mo;Lee, Jin Hyeok;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.11a / pp.71-73 / 2015
  • In this paper, we propose a real-time free viewpoint TV system built with multiple Microsoft Kinects and NVIDIA's CUDA GPGPU library. It generates a virtual view between two views in real time from the color and depth images acquired by the Kinects. To reduce the complexity of the coordinate transformations and of the nearest-neighbor hole filling required by IR-pattern interference, we parallelize these processes with CUDA. Finally, we observe that the CUDA-based system generates more frames in the same time than a CPU-based implementation.

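A CPU-side sketch (NumPy/SciPy, not CUDA) of the nearest-neighbor hole filling that the system above parallelizes on the GPU: each depth pixel lost to IR-pattern interference is replaced by the value of the nearest valid pixel. The zero hole value and the toy depth map are assumptions.

```python
# Nearest-neighbour depth hole filling via a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_depth_holes_nearest(depth, hole_value=0):
    """Replace hole pixels with the depth of the nearest valid pixel."""
    holes = (depth == hole_value)
    # Indices of the nearest non-hole pixel for every position.
    _, (iy, ix) = distance_transform_edt(holes, return_indices=True)
    return depth[iy, ix]

if __name__ == "__main__":
    d = np.array([[10, 10,  0, 12],
                  [10,  0,  0, 12],
                  [11, 11, 12, 12]], dtype=np.uint16)
    print(fill_depth_holes_nearest(d))
```
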

Design of Free Viewpoint TV System with MS Kinects (MS Kinect 를 이용한 Free Viewpoint TV System 설계)

  • Lee, Jun Hyeop;Yang, Yun Mo;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.122-124 / 2015
  • This paper presents the design and implementation of a free viewpoint TV system using multiple Microsoft Kinects. It generates a virtual view between two views in real time by manipulating the texture and depth images captured by the Kinects. To handle the depth holes caused by interference between the Kinects' IR patterns, we propose a hole-filling scheme using nearest-neighbor filling and inpainting. As a result, holes generated by interference are filled with new depth values computed from their neighbors. These depth values are not exact, but they are similar to their neighbors. We also observe that, depending on how many times the nearest-neighbor method is applied, object borders can shift inward or outward.

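A small sketch of the inpainting half of the hole-filling scheme described above, using OpenCV's generic cv2.inpaint as a stand-in. How the paper combines nearest-neighbor filling with inpainting is not reproduced here; the radius and the 8-bit depth image are illustrative assumptions (a 16-bit Kinect depth map would need scaling first, since cv2.inpaint expects 8-bit input).

```python
# Fill zero-valued depth holes in an 8-bit depth image with Telea inpainting.
import cv2
import numpy as np

def inpaint_depth_holes(depth_8u, hole_value=0, radius=3):
    """Inpaint hole pixels (value == hole_value) in an 8-bit depth image."""
    mask = (depth_8u == hole_value).astype(np.uint8) * 255
    return cv2.inpaint(depth_8u, mask, radius, cv2.INPAINT_TELEA)

if __name__ == "__main__":
    d = np.full((64, 64), 120, np.uint8)
    d[30:34, 30:34] = 0                      # simulated interference hole
    filled = inpaint_depth_holes(d)
    print(int(filled[31, 31]))               # close to the surrounding 120
```
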

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.295-300 / 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, 3D object information is captured from several viewpoints and stored as elemental images. Users can then see a 3D reconstructed image when the elemental images are displayed through a lens array. The elemental images can be created by computer graphics, which is referred to as computer-generated integral imaging. The process of creating the elemental images is called image mapping. Several image mapping methods have been proposed, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or a performance barrier as the number of elemental lenses in the lens array increases. Thus, it is difficult to use them in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. The paper first describes the concept of VVR, then compares the performance of its image mapping process with that of previous methods, and finally discusses possible directions for future improvement.

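For orientation, a generic sketch of elemental-image generation by rendering the scene once per elemental lens, i.e. the MVR-style baseline named in the abstract; the proposed VVR method, which reduces this cost, is not reproduced. The lens-grid size, per-lens resolution, and the dummy renderer are placeholders.

```python
# Tile per-lens renderings into one elemental image array (MVR-style baseline).
import numpy as np

LENS_GRID = (4, 4)        # lenses across the lens array (assumed)
ELEM_RES = (32, 32)       # pixels behind each elemental lens (assumed)

def render_from_viewpoint(ix, iy):
    """Placeholder renderer: returns one grayscale image per lens position."""
    shade = int(255 * (ix + iy) / (LENS_GRID[0] + LENS_GRID[1] - 2))
    return np.full((ELEM_RES[1], ELEM_RES[0]), shade, np.uint8)

def build_elemental_image_array():
    rows = []
    for iy in range(LENS_GRID[1]):
        rows.append(np.hstack([render_from_viewpoint(ix, iy)
                               for ix in range(LENS_GRID[0])]))
    return np.vstack(rows)

if __name__ == "__main__":
    print(build_elemental_image_array().shape)   # (128, 128)
```
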
A Study on Ontology of Digital Photo Image Focused on a Simulacre Concept of Deleuze & Baudrillard (디지털 사진 이미지의 존재론에 관한 연구 -들뢰즈와 보드리야르의 시뮬라크르 개념을 중심으로)

  • Gwon, Oh-sang
    • Cartoon and Animation Studies / s.51 / pp.391-411 / 2018
  • The purpose of this study is to examine the ontology of the digital photographic image based on the simulacrum concept of Gilles Deleuze and Jean Baudrillard. Traditionally, the analog image follows the logic of reproduction, bearing a similarity to its original referent. The visual reality of the analog image is therefore illuminated, interpreted, and described from a subjective viewpoint, but it does not deviate from the interpreted reality. The digital image, by contrast, does not exist physically; it exists as information made of mathematical data, a digital algorithm. In the digital image, the essence of a subject that 'once existed there' no longer holds, and the image no longer points to or reproduces an external referent. The digital image thus loses both similarity and indexicality. It is converted into a virtual domain: not the reproduction of what already exists, but the display of what does not yet exist. This non-being of the digital image changes our understanding of reality, existence, and imagination. Dividing the image into the real and the imaginary becomes meaningless; the digital image is not merely a technical improvement but a new kind of image fundamentally different from existing images. Ultimately, the digital image of today goes beyond visualizing an existing referent: nonexistent things are visualized, and reality operates virtually. The digital image does not reproduce our reality but realistically produces another reality. In other words, it is a virtual reproduction that generates an image unrelated to any referent, that is, a simulacrum. In the virtually simulated world, reality holds infinite possibility; it is not a picture of the past or present but the infinite virtual that is not fixed, is endlessly mutable, and is not yet actualized.

Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing (순차적 이중 전방 사상의 병렬 처리를 통한 다중 시점 고속 영상 합성)

  • Choi, Ji-Youn;Ryu, Sae-Woon;Shin, Hong-Chang;Park, Jong-Il
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1303-1310 / 2009
  • Glasses-free 3D displays require multiple images taken from different viewpoints to show a scene. The simplest way to obtain multi-view images is to use as many cameras as there are required views. However, synchronizing the cameras and computing and transmitting the large amount of data then become critical problems. Thus, generating such a large number of viewpoint images effectively is emerging as a key technique in 3D video technology. Image-based view synthesis is an algorithm for generating various virtual viewpoint images using a limited number of views and depth maps. In this paper, because a virtual-view image can be expressed as a transformation of a real view under certain depth conditions, we propose an algorithm that synthesizes multiple views from two reference view images and their depth maps by stepwise duplex forward mapping. In addition, because the geometric relationship between the real and virtual views is applied repetitively, we implement our algorithm in the OpenGL Shading Language, which allows parallel processing on a programmable graphics processing unit to improve computation time. We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.

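A compact sketch of depth-based forward mapping (3D warping) from one reference view to a horizontally shifted virtual view, the basic operation the paper accelerates with stepwise duplex mapping on the GPU. Rectified cameras, the disparity = focal * baseline / depth relation, and the toy inputs are assumptions; the GLSL implementation and the two-reference (duplex) blending are omitted.

```python
# Forward-warp a reference view to a virtual view using per-pixel depth,
# resolving collisions with a simple z-buffer.
import numpy as np

def forward_warp(color, depth, focal, baseline):
    """Warp a reference view to a virtual view shifted by `baseline`."""
    h, w = depth.shape
    warped = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    xt = np.round(xs - disparity).astype(int)          # target column
    valid = (xt >= 0) & (xt < w)
    for y, x_src, x_dst, z in zip(ys[valid], xs[valid], xt[valid], depth[valid]):
        if z < zbuf[y, x_dst]:                          # nearer surface wins
            zbuf[y, x_dst] = z
            warped[y, x_dst] = color[y, x_src]
    return warped

if __name__ == "__main__":
    c = np.random.randint(0, 255, (48, 64, 3), np.uint8)
    d = np.full((48, 64), 100.0)
    d[:, 32:] = 50.0                                    # two depth layers
    print(forward_warp(c, d, focal=500.0, baseline=0.5).shape)
```
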
VR & Changes in Cinematic Storytelling - Focusing on film composition unit, montage, space, mise-en-scène and perspective - (VR과 영화 스토리텔링의 변화 - 영화 구성단위, 몽타주, 공간성, 미장센, 시점을 중심으로 -)

  • Jeon, Byoungwon;Cha, Minchol
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.991-1001 / 2018
  • In the context of the 4th Industrial Revolution, IoT, Big Data, and VR are rapidly emerging as core sectors of future industries. In particular, VR has been in the limelight as new media content appealing to a new generation. The VR user is not merely a 'spectator' but an 'actor'. In other words, the newness of VR lies not in a more lifelike representation of virtual reality, but in letting the user act (more technically, interact) within the virtual world. In this paper, we examine VR cinema in terms of film composition units, montage, cinematic space, mise-en-scène, and perspective. VR cinema, which is in the early stages of its evolution, is basically based on 360° images that strengthen the autonomy of the audience's point of view, but other factors such as haptic and sonic immersion are becoming increasingly important. In addition, VR cinema will be combined with AR, MR, SR, and interactive technologies and will expand its horizons as it is produced in various forms. A more detailed viewpoint is therefore expected to be applied in subsequent studies of VR cinema.