• Title/Summary/Keyword: GPU-based rendering

MPEG-I RVS Software Speed-up for Real-time Application (실시간 렌더링을 위한 MPEG-I RVS 가속화 기법)

  • Ahn, Heejune; Lee, Myeong-jin
    • Journal of Broadcast Engineering, v.25 no.5, pp.655-664, 2020
  • Free-viewpoint image synthesis is one of the key technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used within the MPEG-I group, is a DIBR (Depth Image-Based Rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS uses a computer-graphics-style mesh-surface method and outperforms previous pixel-based methods by 2.5 dB or more. Although its OpenGL version is about 10 times faster than the non-OpenGL one, it still falls short of real time, running at 0.75 fps on two 2K-resolution input views. In this paper, we analyze the internals of the RVS implementation and restructure it, achieving a 34-fold speed-up and thus real-time performance (22-26 fps) through three key improvements: 1) reuse of OpenGL buffer and texture objects, 2) parallelization of file I/O and OpenGL execution, and 3) parallelization of the GPU shader programs and buffer transfers.
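
The pipelining in improvement 2) can be pictured with a small producer-consumer sketch. This is not the RVS code: the loader, the queue depth, and the `synthesize_view` stand-in are assumptions, and the real implementation overlaps file I/O with OpenGL execution rather than with Python threads.

```python
# Minimal sketch of overlapping file I/O with rendering via a loader thread.
# All names here are hypothetical stand-ins, not the actual RVS code.
import queue
import threading

def load_frames(paths, out_q):
    """Producer: reads input views from disk while the GPU is busy."""
    for path in paths:
        with open(path, "rb") as f:
            out_q.put(f.read())      # decoded view + depth in a real pipeline
    out_q.put(None)                  # sentinel: no more frames

def synthesize_view(frame):
    pass                             # placeholder for the mesh-warp/blend pass

def render_loop(frame_q):
    """Consumer: uploads each frame and synthesizes the virtual view."""
    while True:
        frame = frame_q.get()
        if frame is None:
            break
        # Improvement 1) would update persistent GPU buffers/textures in
        # place here instead of re-creating them every frame.
        synthesize_view(frame)

frame_q = queue.Queue(maxsize=4)     # bounded, so the loader cannot run far ahead
paths = []                           # input view file paths would go here
threading.Thread(target=load_frames, args=(paths, frame_q), daemon=True).start()
render_loop(frame_q)
```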

Synthesis of Ocean Wave Models and Simulation Using GPU (바다물결 모형의 합성 및 GPU를 이용한 시뮬레이션)

  • Lee, Dong-Min; Lee, Sung-Kee
    • The KIPS Transactions: Part A, v.14A no.7, pp.421-434, 2007
  • Among the many natural scenes generated with computer graphics, ocean surfaces are among the most complicated and time-consuming to represent because of their large extent and complex surface movement. We present a hybrid method to represent and animate unbounded deep-water ocean surfaces that uses the graphics processor as both the simulation and the rendering core. Our technique is mainly based on spectral approaches, which generate a highly detailed height field using a Fourier transform on a 2D regular grid. In addition, we incorporate the Gerstner model and generate a low-detail height field on a 2D projected grid to represent large waves and the main structure of the ocean surface. Because the entire simulation and rendering process runs on the graphics processor, there is no interruption between CPU and GPU and no need to transfer simulation results from system memory to graphics hardware. As a result, we can synthesize and render realistic water surfaces in real time. The proposed techniques are readily adoptable in real-time applications such as computer games, which place a heavy workload on the CPU but still demand plausible natural scenes.
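
The spectral step the abstract describes (a Fourier-transformed height field on a 2D regular grid) can be sketched in a few lines of NumPy. The Phillips-style spectrum and every constant below are illustrative choices, not the paper's exact model, and a real implementation would evaluate this per frame on the GPU.

```python
# Sketch: build a Phillips-like spectrum on a 2D grid, invert it with an FFT
# to obtain one time slice of an ocean height field. Constants are illustrative.
import numpy as np

N, L = 256, 1000.0                     # grid resolution and patch size (m)
g, wind = 9.81, np.array([31.0, 0.0])  # gravity and wind vector (m/s)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
kk = np.hypot(kx, ky)
kk[0, 0] = 1e-8                        # avoid division by zero at DC

Lw = np.dot(wind, wind) / g            # largest wave arising from the wind speed
wdir = wind / np.linalg.norm(wind)
cos_f = (kx * wdir[0] + ky * wdir[1]) / kk
phillips = np.exp(-1.0 / (kk * Lw) ** 2) / kk**4 * cos_f**2
phillips[0, 0] = 0.0                   # no DC component

rng = np.random.default_rng(7)
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h_tilde = noise * np.sqrt(phillips / 2.0)

height = np.fft.ifft2(h_tilde).real    # spatial height field, one time slice
print(height.shape, height.std())
```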

Physically Inspired Fast Lightning Rendering (물리적 특성을 고려한 빠른 번개 렌더링)

  • Yun, Jeongsu; Yoon, Sung-Eui
    • Journal of the Korea Computer Graphics Society, v.22 no.3, pp.53-61, 2016
  • In this paper, we propose an algorithm for generating lightning paths that are more realistic than those of random-tree-based algorithms and faster than physically based simulation. Our approach builds on the physically based Dielectric Breakdown Method (DBM) and drastically approximates the electric potential field when generating the lightning path. We also present a guide-path method that lets the lightning avoid obstacles in a complex scene. Finally, our method renders fast, realistic lightning by incorporating physical characteristics into the thickness and brightness of the lightning stream. The resulting lightning paths resemble the natural phenomenon, with a fractal dimension of about 1.56, and are generated faster than with a previous physically based algorithm. Our method is not yet fast enough for real-time games, but it could be accelerated by moving the path-generation algorithm to the GPU in future work.
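
A toy version of DBM growth with an approximated potential conveys the idea. The 1/r-style potential, the grid size, and the branching exponent `ETA` are assumptions chosen for illustration; the paper's approximation, guide paths, and rendering are more elaborate.

```python
# Toy DBM-style growth: candidates adjacent to the lightning tree grow with
# probability proportional to (approximate potential)^eta. Higher eta gives
# straighter, less branchy paths. A cheap 1/r sum stands in for the field.
import random

import numpy as np

random.seed(3)
ETA = 3.0
SIZE = 64
tree = [(SIZE // 2, 0)]                  # lightning starts at the top row

def potential(cell, charges):
    """Approximate potential: screened by every cell already in the tree."""
    r = np.hypot(charges[:, 0] - cell[0], charges[:, 1] - cell[1])
    return 1.0 - np.mean(1.0 / r)        # r >= 1 on the grid, so this is >= 0

for _ in range(400):
    charges = np.array(tree, dtype=float)
    occupied = set(tree)
    cands = {(x + dx, y + dy)
             for x, y in tree
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE}
    cands = sorted(cands - occupied)     # deterministic candidate order
    phi = np.array([potential(c, charges) for c in cands]) ** ETA
    if phi.sum() <= 0:
        break
    nxt = random.choices(cands, weights=phi)[0]
    tree.append(nxt)
    if nxt[1] == SIZE - 1:               # reached the ground row
        break
print(len(tree), "cells in the lightning path")
```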

Color2Gray using Conventional Approaches in Black-and-White Photography (전통적 사진 기법에 기반한 컬러 영상의 흑백 변환)

  • Jang, Hyuk-Su; Choi, Min-Gyu
    • Journal of the Korea Computer Graphics Society, v.14 no.3, pp.1-9, 2008
  • This paper presents a novel optimization-based, saliency-preserving method for converting color images to grayscale in a manner consistent with the conventional practice of black-and-white photographers. In black-and-white photography, a colored filter called a contrast filter is commonly mounted on the camera to lighten or darken selected colors. In addition, local exposure controls such as dodging and burning are typically employed in the darkroom to change the exposure of local areas within the print without affecting the overall exposure. Our method seeks a digital version of the conventional contrast filter that preserves visually important image features. Furthermore, conventional burning and dodging are addressed, together with image-similarity weights, to give edge-aware local exposure control over the image space. Our method can be optimized efficiently on the GPU: in our experiments, a CUDA implementation converts 1-megapixel color images to grayscale at interactive frame rates.
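
The "digital contrast filter" can be pictured as a per-channel weighted mix, with the weights playing the role of the colored filter in front of the lens. The paper optimizes such weights to preserve saliency; the fixed weights below are hand-picked purely for illustration.

```python
# Sketch: grayscale conversion as a weighted channel mix. A red-heavy weight
# vector mimics a red contrast filter (darkens blue skies, lightens skin).
import numpy as np

def contrast_filter_gray(rgb, weights):
    """rgb: (H, W, 3) floats in [0, 1]; weights: filter response per channel."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # keep overall exposure unchanged
    return rgb @ w

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))              # stand-in for a real photograph

neutral = contrast_filter_gray(img, [0.299, 0.587, 0.114])  # standard luma
red_filter = contrast_filter_gray(img, [0.8, 0.15, 0.05])   # red-filter look
print(neutral.mean(), red_filter.mean())
```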

Real-Time Indirect Illumination using a Light Quad-Tree (광원 트리를 사용한 간접 조명의 실시간 렌더링)

  • Ki, Hyun-Woo; Oh, Kyoung-Su
    • Journal of KIISE: Computer Systems and Theory, v.34 no.4, pp.158-167, 2007
  • Indirect illumination plays an important role in realistic image synthesis. We present a novel real-time indirect illumination rendering technique using image pyramids. Hundreds of thousands of indirect point light sources are stored in images and then hierarchically clustered into quad-tree image pyramids. We also introduce a GPU-based, top-down, breadth-first traversal of the quad-trees that approximates the illumination with clusters (sets of lights). All steps run entirely on the GPU in real time. Result images demonstrate that our method reproduces diffuse interreflection, and color bleeding in particular, well. We achieve interactive frame rates of tens to hundreds of frames per second without any preprocessing, avoid sampling artifacts, and run seven times faster than a recently proposed sampling-based method.
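
The light-pyramid idea can be sketched as follows: point lights stored in an image are clustered by repeated 2x2 averaging, and shading descends the resulting quad-tree until a cluster looks small enough from the shading point. The error metric, the threshold `tau`, and the unoccluded point-light model are illustrative assumptions, and this traversal is depth-first rather than the paper's breadth-first GPU scheme.

```python
# Sketch: build a quad-tree (mip pyramid) over a light image, then shade by
# refining clusters only where they subtend too much from the shading point.
import numpy as np

def build_pyramid(pos, power):
    """Each level halves resolution; positions are power-weighted averages."""
    levels = [(pos, power)]
    while pos.shape[0] > 1:
        n, m = pos.shape[0], pos.shape[1]
        w = power.reshape(n // 2, 2, m // 2, 2).sum(axis=(1, 3))
        wp = (pos * power[..., None]).reshape(
            n // 2, 2, m // 2, 2, 3).sum(axis=(1, 3))
        pos, power = wp / np.maximum(w, 1e-12)[..., None], w
        levels.append((pos, power))
    return levels[::-1]                  # coarsest (1x1) level first

def shade(x, levels, max_level, tau=0.7):
    """Top-down refinement: descend while a cluster looks too big from x."""
    total, stack = 0.0, [(0, 0, 0)]      # (level, i, j), starting at the root
    while stack:
        lv, i, j = stack.pop()
        pos, power = levels[lv]
        d = np.linalg.norm(pos[i, j] - x)
        size = 2.0 ** (max_level - lv)   # node width in lights: a crude proxy
        if lv == max_level or size / max(d, 1e-6) < tau:
            total += power[i, j] / max(d * d, 1e-6)   # unoccluded point light
        else:
            stack += [(lv + 1, 2 * i + a, 2 * j + b)
                      for a in (0, 1) for b in (0, 1)]
    return total

rng = np.random.default_rng(1)
levels = build_pyramid(rng.random((64, 64, 3)) * 100.0, rng.random((64, 64)))
print(shade(np.array([50.0, 50.0, 0.0]), levels, max_level=len(levels) - 1))
```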

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.; Jacquemin, C.; Pointal, L.; Katz, B.
    • Proceedings of the Korea Information Convergence Society Conference, 2008.06a, pp.53-56, 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for research along two main lines: 1) perceptual experiments (e.g., the perception of expressivity and 3D movement in both the audio and visual channels); and 2) the design of human-computer interfaces requiring head models at different resolutions, and the integration of a talking head into virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The system architecture is based on distributed modules exchanging messages through a network protocol. Its main components are: RITEL, a question-and-answer system searching raw text, which produces a text answer together with attitudinal information; the attitudinal information is then processed to deliver expressive tags, while the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech are played in a 3D audiovisual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move through the virtual scene with realistic 3D visual and audio rendering.

View synthesis with sparse light field for 6DoF immersive video

  • Kwak, Sangwoon; Yun, Joungil; Jeong, Jun-Young; Kim, Youngwook; Ihm, Insung; Cheong, Won-Sik; Seo, Jeongil
    • ETRI Journal, v.44 no.1, pp.24-37, 2022
  • Virtual view synthesis, which generates novel views with the characteristics of actually acquired images, is an essential component for delivering immersive video with realistic binocular disparity and smooth motion parallax. It is typically achieved by warping the given images to the designated viewing position, blending the warped images, and filling the remaining holes. For 6DoF use cases with large motion, patch-based warping is preferable to conventional pixel-based methods, and in that case the quality of the synthesized image depends heavily on how the warped images are blended. Based on this observation, we propose a novel blending architecture that exploits the similarity of ray directions and the distribution of depth values. Results show that the proposed method synthesizes better views than the well-designed synthesizers used within the Moving Picture Experts Group (MPEG-I). We also describe a GPU-based implementation that synthesizes and renders views in real time, demonstrating applicability to immersive video services.
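
The blending stage can be sketched as a per-pixel weighted average over the warped views, with weights favoring source rays that agree with the target ray and surfaces with smaller warped depth. The weight formula and exponents below are assumptions for illustration; the paper's weighting and hole handling are more involved.

```python
# Sketch: blend V warped views per pixel from ray-direction agreement and depth.
import numpy as np

def blend(colors, depths, ray_cos, alpha=4.0, beta=2.0):
    """colors: (V, H, W, 3) warped views; depths: (V, H, W) warped depths;
    ray_cos: (V, H, W) cosine between each source ray and the target ray."""
    w = np.clip(ray_cos, 0.0, 1.0) ** alpha / np.maximum(depths, 1e-6) ** beta
    w /= np.maximum(w.sum(axis=0, keepdims=True), 1e-12)  # normalize per pixel
    return (colors * w[..., None]).sum(axis=0)

rng = np.random.default_rng(2)
V, H, W = 3, 4, 4                         # three warped source views
out = blend(rng.random((V, H, W, 3)),     # warped colors (stand-in data)
            rng.random((V, H, W)) + 0.5,  # warped depths
            rng.random((V, H, W)))        # ray-direction similarity
print(out.shape)
```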

A Method for Client-Server Allocation for Maximum Load Balancing and Automatic Frame Rate Adjustment in a Game Streaming Environment (게임 스트리밍 환경에서 최대 부하 균등 및 자동 프레임 레이트 조절을 위한 클라이언트-서버 배정 방법)

  • Kim, Sangchul
    • Journal of Korea Game Society, v.20 no.4, pp.77-88, 2020
  • Recently, interest in game streaming for cloud-based gaming has been high. In game streaming, remote game servers perform graphics rendering and stream the resulting scene images to clients' devices over the Internet. We model the client-server allocation (CSA) problem of balancing the GPU load between servers in a game streaming environment as an optimization problem and propose a simulated-annealing-based method. Our method takes network-delay constraints into account and can automatically adjust the frame rate of game sessions when necessary.
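
A compact sketch of the simulated-annealing formulation: assign each game session to a server so that the maximum GPU load is minimized, subject to delay feasibility. The load model, the feasibility matrix, and the cooling schedule below are simplified stand-ins for the paper's formulation.

```python
# Sketch: simulated annealing over client-to-server assignments, minimizing
# the busiest server's GPU load under per-client delay feasibility.
import math
import random

random.seed(0)
N_CLIENTS, N_SERVERS = 30, 4
load = [random.uniform(0.5, 2.0) for _ in range(N_CLIENTS)]  # GPU cost/session
# Delay feasibility; server 0 is always reachable so a feasible start exists.
ok = [[True] + [random.random() < 0.8 for _ in range(N_SERVERS - 1)]
      for _ in range(N_CLIENTS)]

def cost(assign):
    """Objective: load of the busiest server (lower means better balanced)."""
    per_server = [0.0] * N_SERVERS
    for c, s in enumerate(assign):
        per_server[s] += load[c]
    return max(per_server)

assign = [0] * N_CLIENTS                 # feasible initial allocation
cur, temp = cost(assign), 5.0
while temp > 1e-3:
    c = random.randrange(N_CLIENTS)      # move one random session
    old = assign[c]
    assign[c] = random.choice([s for s in range(N_SERVERS) if ok[c][s]])
    new = cost(assign)
    if new < cur or random.random() < math.exp((cur - new) / temp):
        cur = new                        # accept: downhill always, uphill sometimes
    else:
        assign[c] = old                  # reject and restore
    temp *= 0.999                        # geometric cooling
print("max server load:", round(cur, 3))
```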

Real-Time 3D Volume Deformation and Visualization by Integrating NeRF, PBD, and Parallel Resampling (NeRF, PBD 및 병렬 리샘플링을 결합한 실시간 3D 볼륨 변형체 시각화)

  • Sangmin Kwon; Sojin Jeon; Juni Park; Dasol Kim; Heewon Kye
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.189-198, 2024
  • Research combining deep-learning-based models and physical simulation is making important advances in the medical field: it extracts the necessary information from medical image data and enables fast, accurate prediction of skeletal and soft-tissue deformation based on physical laws. This study proposes a system that integrates Neural Radiance Fields (NeRF), Position-Based Dynamics (PBD), and parallel resampling to generate 3D volume data and deform and visualize it in real time. NeRF produces high-resolution 3D volume data from 2D images and camera coordinates, while PBD enables real-time deformation and interaction through physics-based simulation. Parallel resampling improves rendering efficiency by dividing the volume into tetrahedral meshes and exploiting GPU parallelism. The system renders the deformed volume data using ray casting, again leveraging GPU parallel processing for fast real-time visualization. Experimental results show that the system can generate and deform 3D data without expensive equipment, demonstrating potential applications in engineering, education, and medicine.
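
The PBD component can be illustrated with a minimal distance-constraint solver: predict positions, iteratively project constraints, then derive velocities from the position change. The three-particle chain, the pinning, and the iteration count below are illustrative; the paper solves constraints on a tetrahedral mesh on the GPU.

```python
# Sketch: one Position-Based Dynamics step for a pinned chain of particles
# connected by distance constraints (a stand-in for a tetrahedral mesh).
import numpy as np

def pbd_step(x, v, constraints, rest, dt=0.016, iters=10, gravity=-9.8):
    """Predict positions, project constraints, update velocities."""
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    v[:, 1] += gravity * dt
    p = x + v * dt                              # predicted positions
    p[0] = x[0]                                 # particle 0 is pinned in place
    for _ in range(iters):
        for (i, j), d0 in zip(constraints, rest):
            delta = p[j] - p[i]
            d = np.linalg.norm(delta)
            if d < 1e-9:
                continue
            corr = 0.5 * (d - d0) * delta / d   # split the correction evenly
            if i != 0:
                p[i] += corr                    # pinned particle never moves
            if j != 0:
                p[j] -= corr
    v = (p - x) / dt                            # velocity from position change
    return p, v

x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
v = np.zeros_like(x)
constraints, rest = [(0, 1), (1, 2)], [1.0, 1.0]
for _ in range(60):                             # ~1 second of simulated time
    x, v = pbd_step(x, v, constraints, rest)
print(x)                                        # chain hangs from the pinned end
```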

Large-Scale Ultrasound Volume Rendering using Bricking (블리킹을 이용한 대용량 초음파 볼륨 데이터 렌더링)

  • Kim, Ju-Hwan; Kwon, Koo-Joo; Shin, Byeong-Seok
    • Journal of the Korea Society of Computer and Information, v.13 no.7, pp.117-126, 2008
  • Recent advances in medical imaging technologies have enabled high-resolution data acquisition, so visualizing such large data sets on standard graphics hardware has become a popular research theme. Among the many visualization techniques, we focus on bricking, which divides the entire volume into smaller bricks and renders them in order. Since bricking swaps bricks between main memory and GPU memory on the fly, the number of memory swaps must be minimized to achieve good performance. Moreover, because the original bricking algorithm was designed for regular volume data such as CT and MR, applying it to ultrasound volume data, which is based on a toroidal coordinate space, degrades performance: in some areas near brick boundaries, an orthogonal viewing ray intersects a single brick twice, so that brick's memory is uploaded to the GPU twice in a single frame. To avoid this redundancy, we divide the volume into bricks that overlap one another, and we suggest a formula to determine the appropriate size of the shared area between bricks. Using this formula, we minimize memory bandwidth and at the same time achieve better rendering performance.
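
The overlapped bricking can be sketched as a partition along each axis in which neighboring bricks share boundary voxels, so a ray never re-enters (and re-uploads) a brick it has already left. The overlap width is a free parameter in this sketch; the paper derives the appropriate shared-area size analytically for its toroidal ultrasound grid.

```python
# Sketch: 1D brick ranges with a shared border between neighbors. Overlap is
# a parameter here; the paper computes the required size from the geometry.
def make_bricks(volume_size, brick_size, overlap):
    """Return (start, end) voxel ranges with shared boundary voxels."""
    assert 0 <= overlap < brick_size
    bricks, start = [], 0
    step = brick_size - overlap            # stride between brick origins
    while start < volume_size:
        end = min(start + brick_size, volume_size)
        bricks.append((start, end))
        if end == volume_size:
            break
        start += step
    return bricks

# 512 voxels along one axis, 128-voxel bricks sharing a 16-voxel border:
for b in make_bricks(512, 128, 16):
    print(b)
```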
