• Title/Summary/Keyword: jittered sampling

High Quality Volume Rendering Using the Empty Space Jittering and the Sampling Alignment Method (빈공간 교란과 샘플링 위치 정렬을 이용한 고화질 볼륨 가시화)

  • Kye, Heewon
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.7
    • /
    • pp.852-861
    • /
    • 2013
  • When users work with medical volume rendering applications, selecting a specific region of the volume data and observing it under magnification is a common task. Since the wood-grain artifact arises in magnified images, jittered sampling has been used to remove it. However, jittered sampling introduces noise along volume edges. In this research, we reveal the cause of the noise and present a solution. To remove the wood-grain artifact without introducing this noise, we propose the empty space jittering and sampling alignment methods. Using these methods, we can produce high-quality volume rendering images without noticeable additional rendering time.
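The jittered sampling technique the abstract refers to is commonly implemented by offsetting each ray's first sample by a random fraction of the sampling step, which breaks up the coherent sampling planes that produce the wood-grain artifact. A minimal sketch of that idea (the function name and per-ray uniform jitter are illustrative, not the paper's exact method):

```python
import numpy as np

def jittered_sample_positions(ray_start, ray_dir, step, n_samples, rng):
    """Sample positions along a ray with a random start offset.

    A uniform per-ray jitter in [0, step) shifts every sample on that
    ray, so neighboring rays no longer sample the volume on the same
    set of planes -- the coherence that causes wood-grain artifacts
    in magnified volume renderings.
    """
    jitter = rng.uniform(0.0, step)           # one random offset per ray
    t = jitter + step * np.arange(n_samples)  # distances along the ray
    return ray_start + np.outer(t, ray_dir)   # (n_samples, 3) positions

# Example: one ray marching along +z with step 0.5
rng = np.random.default_rng(0)
pts = jittered_sample_positions(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                step=0.5, n_samples=4, rng=rng)
```

Note that jittering the start position this way perturbs samples even inside homogeneous or empty regions, which is consistent with the edge noise the paper attributes to naive jittering and motivates restricting the jitter to empty space.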

Realistic and Fast Depth-of-Field Rendering in Direct Volume Rendering (직접 볼륨 렌더링에서 사실적인 고속 피사계 심도 렌더링)

  • Kang, Jiseon;Lee, Jeongjin;Shin, Yeong-Gil;Kim, Bohyoung
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.75-83
    • /
    • 2019
  • Direct volume rendering is a widely used method for visualizing three-dimensional volume data such as medical images. This paper proposes a method for applying depth-of-field effects to volume ray casting, enabling more realistic depth-of-field rendering in direct volume rendering. The proposed method exploits a camera model based on the human perceptual model and can obtain realistic images with a limited number of rays using jittered lens sampling. It also enables interactive exploration of volume data by computing depth of field on the fly in the GPU pipeline without preprocessing. In experiments with various data including medical images, we demonstrate that depth-of-field images with better depth perception are generated 2.6 to 4 times faster than with the conventional method.
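Jittered lens sampling for depth of field is typically built on a thin-lens camera model: each ray originates from a randomly jittered point on the lens aperture and is aimed at the point where the corresponding pinhole ray crosses the focal plane, so objects on that plane stay sharp while everything else blurs. A sketch under those assumptions (the function name and the polar disk sampling are illustrative, not the paper's exact perceptual camera model):

```python
import numpy as np

def thin_lens_ray(pixel_dir, lens_radius, focal_dist, rng):
    """Generate one depth-of-field ray via jittered lens sampling.

    pixel_dir: pinhole ray direction from the camera at the origin,
               with the camera looking along +z.
    Returns (lens_point, ray_direction) for the jittered ray.
    """
    # Jittered point on the lens disk (uniform polar sampling).
    r = lens_radius * np.sqrt(rng.uniform())
    phi = 2.0 * np.pi * rng.uniform()
    lens_pt = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])

    # Point where the pinhole ray crosses the focal plane z = focal_dist.
    t = focal_dist / pixel_dir[2]
    focus_pt = t * pixel_dir

    # Redirect the jittered ray through that in-focus point.
    new_dir = focus_pt - lens_pt
    new_dir /= np.linalg.norm(new_dir)
    return lens_pt, new_dir
```

Averaging the ray-cast results of several such jittered rays per pixel approximates the lens integral; the jitter turns the banding that a fixed lens-sample pattern would produce into less objectionable noise, which is why a limited number of rays can suffice.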