• Title/Summary/Keyword: GPU-based rendering

88 search results

Efficient GPU Framework for Adaptive and Continuous Signed Distance Field Construction, and Its Applications

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.63-69 / 2022
  • In this paper, we propose a new GPU-based framework for quickly computing adaptive, continuous signed distance fields (SDFs), and examine rendering and collision-handling applications that use them. A quadtree built from the triangle mesh is uploaded to GPU memory, and each thread uses it to compute the Euclidean distance to the triangles in parallel, yielding the shortest continuous distance, without discontinuities, in the adaptive grid space. Experiments show that cut-away views of the adaptive distance field, distance queries at specific locations, real-time ray tracing, and collision handling can all be performed quickly and efficiently. With the proposed method, an adaptive signed distance field can be computed in about one second even for a high-polygon mesh, so the approach is suitable not only for rigid bodies but also for deformable bodies. Results on various models demonstrate the stability of the algorithm and its ability to sample and represent distance values accurately.
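The per-thread distance evaluation the abstract describes can be sketched as follows. This is a plain-Python illustration, not the paper's GPU code: the Euclidean point-to-triangle distance each thread would compute for the triangles retrieved through the quadtree, using the standard closest-feature (Voronoi region) classification; the function names are our own.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_triangle_distance(p, a, b, c):
    """Euclidean distance from point p to triangle (a, b, c): classify p
    against the triangle's Voronoi regions, then measure to the closest
    vertex, edge, or the face itself."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:                            # closest to vertex a
        return dot(ap, ap) ** 0.5
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:                           # closest to vertex b
        return dot(bp, bp) ** 0.5
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:                # closest to edge ab
        t = d1 / (d1 - d3)
        q = tuple(a[i] + t * ab[i] for i in range(3))
        return dot(sub(p, q), sub(p, q)) ** 0.5
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:                           # closest to vertex c
        return dot(cp, cp) ** 0.5
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:                # closest to edge ac
        t = d2 / (d2 - d6)
        q = tuple(a[i] + t * ac[i] for i in range(3))
        return dot(sub(p, q), sub(p, q)) ** 0.5
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:  # closest to edge bc
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        q = tuple(b[i] + t * (c[i] - b[i]) for i in range(3))
        return dot(sub(p, q), sub(p, q)) ** 0.5
    # inside the face: interpolate the closest point barycentrically
    denom = 1.0 / (va + vb + vc)
    v, w = vb * denom, vc * denom
    q = tuple(a[i] + v * ab[i] + w * ac[i] for i in range(3))
    return dot(sub(p, q), sub(p, q)) ** 0.5
```

On the GPU, the quadtree limits how many triangles each grid sample has to test; taking the minimum over those candidates gives the unsigned field, with the sign resolved separately.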

Efficient Data Reduction for Point-Based Rendering using Extended QEM (효율적인 점 기반 렌더링을 위한 확장 이차 오류 척도 기반의 간략화 방법 개발)

  • Kim Duck-bong;Kang Eui-chul;Lee Kwan H.;Pajarola Renato B.
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.712-714 / 2005
  • This paper proposes a simplification algorithm for efficient point-based rendering that uses an extended quadric error metrics (QEM) technique. The basic idea of point-based rendering is to represent and render freeform surfaces directly as points, without connectivity information such as a mesh. The extended QEM technique simplifies a mesh while considering not only geometric information but also color and texture-coordinate information. Starting from a polygon mesh model reconstructed from 3D point data, this work proposes a compact, efficient point-based rendering algorithm that simplifies the original point data while accounting for both geometry and color, and demonstrates GPU-based rendering results.
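The geometric core of QEM (Garland and Heckbert) can be illustrated in a few lines. The sketch below is our own minimal Python, not the authors' code: each surface element contributes a 4x4 plane quadric Q = p pᵀ, and the cost of a candidate position v is vᵀQv, the sum of squared distances to the accumulated planes. The paper's *extension*, which additionally folds color and texture-coordinate attributes into the quadric, is omitted here.

```python
def plane_quadric(a, b, c, d):
    """4x4 quadric Q = p p^T for the plane ax + by + cz + d = 0,
    with (a, b, c) a unit normal."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics accumulate by simple matrix addition."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, v):
    """v^T Q v for v = (x, y, z) homogenized with w = 1: the summed
    squared distance from v to every plane folded into q."""
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))
```

Simplification then repeatedly contracts the pair whose merged quadric yields the smallest error.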

DMGL: An OpenGL ES Based Mobile 3D Rendering Libraries (DMGL: OpenGL ES 기반 모바일 3D 렌더링 라이브러리)

  • Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of Korea Multimedia Society / v.11 no.8 / pp.1160-1168 / 2008
  • Recent innovations in mobile hardware, which make it possible to implement real-time 3D rendering effects in mobile environments, have created the potential for realistic mobile application programs. This paper presents DMGL, a set of platform-independent, OpenGL ES-based, real-time mobile rendering libraries for supporting high-quality 3D rendering on handheld devices. The libraries allow programmers developing mobile graphics software to produce a variety of advanced real-time 3D graphics effects without great effort. Moreover, GPGPU-based components provide functions that solve the complex equations used to simulate natural phenomena such as smoke and fire, and render the results in real time.

Realistic and Fast Depth-of-Field Rendering in Direct Volume Rendering (직접 볼륨 렌더링에서 사실적인 고속 피사계 심도 렌더링)

  • Kang, Jiseon;Lee, Jeongjin;Shin, Yeong-Gil;Kim, Bohyoung
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.75-83 / 2019
  • Direct volume rendering is a widely used method for visualizing three-dimensional volume data such as medical images. This paper proposes a method for applying depth-of-field effects to volume ray casting, enabling more realistic depth-of-field rendering in direct volume rendering. The proposed method exploits a camera model based on human perception and obtains realistic images with a limited number of rays by using jittered lens sampling. It also enables interactive exploration of volume data by calculating depth of field on the fly in the GPU pipeline, without preprocessing. In experiments with various data, including medical images, we demonstrate that depth-of-field images with better depth perception are generated 2.6 to 4 times faster than with the conventional method.
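Jittered lens sampling of the kind the abstract mentions typically follows the thin-lens camera model: each ray origin is jittered across the aperture and re-aimed at the point where the unjittered ray pierces the focal plane, so content at the focal distance stays sharp while everything else blurs. A minimal sketch under those assumptions (our own names, a square aperture for brevity; the paper's perceptual camera model is more elaborate):

```python
import random

def lens_ray(pinhole_dir, focal_dist, aperture, rng=random):
    """One jittered ray through a thin lens at the origin, looking down +z.
    pinhole_dir is the unjittered ray direction; returns (origin, direction)."""
    # point where the original ray hits the focal plane z = focal_dist
    t = focal_dist / pinhole_dir[2]
    focus = tuple(t * d for d in pinhole_dir)
    # uniform jitter over a square aperture (disk sampling is also common)
    ox = (rng.random() - 0.5) * aperture
    oy = (rng.random() - 0.5) * aperture
    origin = (ox, oy, 0.0)
    d = tuple(f - o for f, o in zip(focus, origin))
    n = sum(c * c for c in d) ** 0.5
    return origin, tuple(c / n for c in d)
```

Averaging several such rays per pixel produces the depth-of-field blur; fewer, well-jittered rays trade noise for speed, which is the regime the paper targets.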

Mesh-based Marching Cubes on the GPU (메시 기반 GPU 마칭큐브)

  • Kim, Hyunjun;Kim, Dohoon;Kim, Minho
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.1-8 / 2018
  • We propose a modified real-time marching cubes technique that extracts isosurfaces in the form of connected meshes instead of triangle soup. In this way, various mesh-based isosurface rendering techniques can be implemented, and additional information about the isosurfaces, such as their topology, can be extracted in real time. In addition, we propose a real-time technique for extracting an adjacency-triangle structure for geometry shaders, which can be used for shading effects such as silhouette rendering. Compared with the previous technique, which welds the output triangles of classical marching cubes, our technique shows up to a 300% performance improvement.
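The welding baseline the abstract compares against can be sketched as follows: merge coincident triangle-soup vertices into a shared vertex/index buffer by hashing positions quantized to a tolerance grid. This is a CPU illustration with our own names, not the paper's GPU implementation (which avoids this post-pass entirely by emitting connected meshes directly).

```python
def weld(soup, eps=1e-6):
    """soup: flat list of (x, y, z) vertex triples, three per triangle.
    Returns (vertices, indices) with duplicate positions merged."""
    verts, indices, lookup = [], [], {}
    for p in soup:
        key = tuple(round(c / eps) for c in p)  # quantize to tolerance grid
        if key not in lookup:
            lookup[key] = len(verts)
            verts.append(p)
        indices.append(lookup[key])
    return verts, indices
```

Two triangles sharing an edge thus collapse from six soup vertices to four shared ones, which is what makes topology queries and adjacency extraction possible on the result.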

GPU-based multi-vision system using randomly-ordered rendering method (임의 순서 렌더링 방법을 이용한 GPU 기반 멀티비전 시스템)

  • Kim, Sungjei;Huh, Jingang;Kim, Je Woo;Kim, Yong-Hwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.227-228 / 2017
  • For ultra-high-resolution (8K and beyond), super-multiview, very-large-volume content to succeed in the market, a single playback system capable of playing such content in real time is needed, but existing technology struggles to meet this requirement. This paper therefore proposes a GPU-based multi-vision system for effectively playing content at 8K resolution and above using existing playback technology, together with a randomly-ordered rendering method that supports stable, synchronized playback across the display screens.

An Efficient Real-Time Rendering of Large Molecular Models based on GPU (GPU 기반의 효율적인 거대 분자의 실시간 렌더링 기법)

  • Lee, Jun;Park, Sung-Jun;Kim, Jee-In
    • Journal of the Korea Computer Graphics Society / v.11 no.3 / pp.19-22 / 2005
  • In bioinformatics, rendering molecular structures in 3D is a very important task. In particular, molecular surface rendering is essential for analyzing the 3D structure of molecules. However, surface rendering requires a large number of polygons; smoothly rendering massive molecules with high molecular weight, such as the E. coli virus, has traditionally required an expensive dedicated graphics workstation. This paper proposes an efficient algorithm that can render such large molecules without difficulty even on an inexpensive commodity PC. The proposed algorithm is a hybrid point-and-polygon rendering technique that maintains both high speed and good image quality. It uses an octree, a hierarchical data structure, and offloads the processing to the GPU for optimal performance. The algorithm was evaluated on a commodity PC, in particular under an SLI (Scalable Link Interface) configuration, in which two graphics cards are connected in parallel for higher performance.
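A hybrid point-and-polygon renderer of the kind the abstract describes typically switches representation per octree node based on projected screen size: distant nodes become point splats, near ones are drawn as polygons. The decision rule below is a hedged illustration with hypothetical names and a simplified projection, not the paper's actual criterion.

```python
def render_mode(node_size, node_dist, fov_scale=1.0, threshold_px=2.0):
    """Choose the representation for one octree node: a point splat when
    its approximate projected size falls below threshold_px pixels,
    polygons otherwise. fov_scale folds in focal length and resolution."""
    projected = fov_scale * node_size / node_dist  # crude screen-space size
    return "point" if projected < threshold_px else "polygon"
```

Traversing the octree top-down and stopping at nodes classified as points is what keeps the polygon count bounded on commodity hardware.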

Adaptive Foveated Ray Tracing Based on Time-Constrained Rendering for Head-Mounted Display (헤드 마운티드 디스플레이를 위한 시간 제약 렌더링을 이용한 적응적 포비티드 광선 추적법)

  • Kim, Youngwook;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.113-123 / 2022
  • Ray-tracing-based rendering creates far more realistic images than traditional rasterization-based rendering. However, it is still burdensome to implement for a head-mounted display (HMD) system, which demands a wide field of view and a high display refresh rate. Furthermore, to present high-quality images on the HMD screen, a sufficient number of ray samples must be taken per pixel to alleviate visually distracting spatial and temporal aliasing. In this paper, we extend the recent selective foveated ray tracing technique of Kim et al. [1] and propose an improved real-time rendering technique that achieves the effect of classic Whitted-style ray tracing on an HMD system. In particular, by combining hardware-accelerated ray tracing with a time-constrained rendering scheme, we show that fast HMD ray tracing, well suited to the human visual system, is possible.
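Foveated ray tracing generally allocates samples according to an acuity model that falls off with angular distance from the gaze point. The sketch below is purely illustrative, not the paper's allocation scheme: it uses a common hyperbolic acuity falloff, 1 / (1 + e / e0), with assumed parameter values.

```python
def samples_per_pixel(ecc_deg, max_spp=4, e0=2.3):
    """Relative ray-sample budget for a pixel at eccentricity ecc_deg
    (degrees from the gaze point): full max_spp at the fovea, decaying
    toward the periphery, clamped to at least one sample."""
    acuity = 1.0 / (1.0 + ecc_deg / e0)
    return max(1, round(max_spp * acuity))
```

A time-constrained scheme would additionally scale such a budget frame by frame so the total ray count fits the refresh deadline.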

Real-time Stereo Video Generation using Graphics Processing Unit (GPU를 이용한 실시간 양안식 영상 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.16 no.4 / pp.596-601 / 2011
  • In this paper, we propose a fast depth-image-based rendering method that uses a graphics processing unit (GPU) to generate virtual view images in real time for a 3D broadcasting system. Before transmission, the input 2D-plus-depth video is encoded with the H.264 coding standard. At the receiver, we decode the received bitstream and generate a stereo video on the GPU, which computes in parallel. We apply a simple and efficient hole-filling method that reduces both decoder complexity and hole-filling errors. In addition, we design a vertically parallel structure for the forward mapping process to exploit the single-instruction, multiple-thread architecture of the GPU, and we utilize high-speed GPU memories to boost computation speed. As a result, virtual view images are generated 15 times faster than with CPU-based processing.
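The forward-mapping step of depth-image-based rendering can be shown on a single scanline. This toy 1D sketch (our own, not the paper's GPU kernel) shifts each reference pixel by a disparity proportional to 1/depth, resolves overlaps with a depth test, and fills remaining holes from a neighboring pixel, a simple scheme in the spirit of the one the abstract describes.

```python
def warp_row(colors, depths, baseline_focal=1.0):
    """Forward-map one scanline to a virtual view. colors and depths are
    equal-length lists; returns the warped row with holes filled."""
    out = [None] * len(colors)
    zbuf = [float("inf")] * len(colors)
    for x, (c, z) in enumerate(zip(colors, depths)):
        d = int(round(baseline_focal / z))       # disparity in pixels
        nx = x + d
        if 0 <= nx < len(out) and z < zbuf[nx]:  # nearer pixel wins
            out[nx], zbuf[nx] = c, z
    for x in range(len(out)):                    # simple hole filling
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 and out[x - 1] is not None else colors[x]
    return out
```

On the GPU, the paper processes columns of such mappings in parallel threads; the depth test above is what prevents background pixels from overwriting foreground ones when their disparities collide.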

Simulation of Deformable Objects using GLSL 4.3

  • Sung, Nak-Jun;Hong, Min;Lee, Seung-Hyun;Choi, Yoo-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.4120-4132 / 2017
  • In this research, we implement a deformable-object simulation system using OpenGL's shading language, GLSL 4.3. The simulation uses a volumetric mass-spring system, a method well suited to real-time deformable-object simulation. The compute shader in GLSL 4.3, which provides access to GPU resources, is used to parallelize the operations of existing deformable-object simulation systems. The proposed system is implemented with a compute shader for parallel processing and includes a bounding-box-based collision detection solution; collision detection is, in general, one of the severe computational bottlenecks in simulating multiple deformable objects. To validate the system's efficiency, we performed experiments using 3D volumetric objects, comparing the performance of multiple deformable-object simulations on the CPU and GPU to analyze the effectiveness of GLSL-based parallel processing. We also measured the computation time of the bounding-box-based collision detection to show that it can run in real time. In experiments using 3D volumetric models with 10K faces, the GPU-based parallel simulation improved performance by 98% over the CPU-based simulation, and the overall pipeline, including collision detection and rendering, ran at a real-time frame rate of 218.11 FPS.
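One time step of the mass-spring update that such a compute shader parallelizes (one mass point or spring per invocation) can be sketched on the CPU. This is our own minimal illustration, not the paper's GLSL: Hooke spring forces followed by symplectic Euler integration, with the bounding-box collision test omitted.

```python
def step(positions, velocities, springs, masses, dt, stiffness):
    """Advance a mass-spring system one step. springs is a list of
    (i, j, rest_length) tuples; positions/velocities are lists of
    [x, y, z]. Mutates and returns the state."""
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    # accumulate Hooke forces along each spring (parallel over springs on GPU)
    for i, j, rest in springs:
        d = [positions[j][k] - positions[i][k] for k in range(3)]
        length = sum(c * c for c in d) ** 0.5
        f = stiffness * (length - rest)  # positive pulls the endpoints together
        for k in range(3):
            forces[i][k] += f * d[k] / length
            forces[j][k] -= f * d[k] / length
    # symplectic Euler: update velocity first, then position (parallel over points)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += dt * forces[i][k] / masses[i]
            positions[i][k] += dt * velocities[i][k]
    return positions, velocities
```

A GPU version would dispatch the two loops as separate compute passes with a barrier between them, since the force accumulation must finish before integration reads it.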