• Title/Summary/Keyword: Color Rendering

Search Results: 252

Enhancement Method of Depth Accuracy in DIBR-Based Multiview Image Generation (다시점 영상 생성을 위한 DIBR 기반의 깊이 정확도 향상 방법)

  • Kim, Minyoung;Cho, Yongjoo;Park, Kyoung Shin
    • KIPS Transactions on Computer and Communication Systems / v.5 no.9 / pp.237-246 / 2016
  • DIBR (Depth Image Based Rendering) is a multimedia technology that generates virtual multi-view images from a color image and a depth image, and it is used for creating glasses-free 3-dimensional display contents. This research describes the effect of depth accuracy on the objective quality of DIBR-based multi-view images. It first evaluated the minimum depth quantization bit depth that keeps distortion low enough that people cannot recognize the quality degradation. It then presented a comparative analysis of non-uniform domain-division quantization versus regular linear quantization to find out how to express the accuracy of the depth information most effectively at the same quantization levels according to scene properties.
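The linear versus non-uniform depth quantization compared in the abstract can be sketched as below. This is a minimal illustration only: the 50% domain split and the 75% share of levels given to near depths are hypothetical parameters, not the paper's actual scheme.

```python
import numpy as np

def quantize_linear(depth, bits):
    """Uniformly quantize normalized depth values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits
    return np.floor(depth * (levels - 1) + 0.5) / (levels - 1)

def quantize_domain_division(depth, bits, near_fraction=0.5, near_share=0.75):
    """Non-uniform (domain-division) quantization: spend `near_share` of the
    levels on the nearest `near_fraction` of the depth range, where DIBR
    warping errors are most visible."""
    levels = 2 ** bits
    near_levels = int(levels * near_share)
    far_levels = levels - near_levels
    return np.where(
        depth < near_fraction,
        np.floor(depth / near_fraction * (near_levels - 1) + 0.5)
        / (near_levels - 1) * near_fraction,
        near_fraction
        + np.floor((depth - near_fraction) / (1 - near_fraction)
                   * (far_levels - 1) + 0.5)
        / (far_levels - 1) * (1 - near_fraction),
    )
```

With the same bit budget, the non-uniform variant trades coarser far-depth steps for finer near-depth steps.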

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong;Kang, Jungwon
    • ETRI Journal / v.44 no.1 / pp.51-61 / 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization delegates a precomputed UV map to each voxel using a UV map lookup table, consequently enabling efficient and high-quality texture mapping without a complex process. Leveraging this convenient UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients for an orthogonal specular map basis. Furthermore, the view-dependent specular maps for an arbitrary view are estimated by combining the specular weights of each source view, based on the locations of the arbitrary and source viewpoints, to generate view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes textures for an arbitrary view, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
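The idea of combining per-source-view specular weights by viewpoint proximity can be sketched as follows. This is a generic angular-falloff weighting under assumed inputs (unit view directions, a hypothetical `power` falloff), not the paper's actual derivation from viewpoint locations.

```python
import numpy as np

def view_blend_weights(novel_dir, source_dirs, power=8.0):
    """Blend weights for per-view specular coefficients: a source view
    contributes more the closer its direction is to the novel view direction.
    `power` sharpens the falloff (illustrative choice)."""
    novel = novel_dir / np.linalg.norm(novel_dir)
    src = source_dirs / np.linalg.norm(source_dirs, axis=1, keepdims=True)
    cos = np.clip(src @ novel, 0.0, 1.0)   # angular similarity per source view
    w = cos ** power
    return w / (w.sum() + 1e-12)           # normalize to sum to 1
```

A novel-view specular map would then be the weighted sum of the source views' specular coefficients.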

Fog Rendering Using Distance-Altitude Scattering Model on 2D Images

  • Lee, Ho-Chang;Jang, Jaeni;Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society / v.14 no.12 / pp.1528-1535 / 2011
  • We present a fog generation algorithm for 2D images. The proposed algorithm provides a scattering model for an approximate calculation of fog density. The scattering model requires distance and altitude information as parameters. However, 2D images do not include that information, so we derive it from depth information generated in an interactive manner and estimate the scattering factor using the scattering model. We then generate the fog effect on the input image from the scattering factor by distance-oriented selective blur and color blending. With this algorithm, we can easily create fog-affected images and fog animations from 2D images.
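A distance-altitude scattering factor of the kind the abstract describes can be sketched with a simple exponential model. The constants and the specific exponential form here are hypothetical stand-ins for the paper's model:

```python
import math

def fog_factor(distance, altitude, beta=0.02, height_scale=50.0):
    """Fraction of light scattered along the view ray: fog density thins
    with altitude, attenuation grows with distance. Returns 0 (clear) to 1
    (fully fogged). beta and height_scale are illustrative constants."""
    density = math.exp(-altitude / height_scale)      # thinner fog higher up
    transmittance = math.exp(-beta * density * distance)
    return 1.0 - transmittance

def blend_fog(pixel_rgb, fog_rgb, f):
    """Blend the input pixel toward the fog color by the fog factor f."""
    return tuple((1 - f) * c + f * g for c, g in zip(pixel_rgb, fog_rgb))
```

Per pixel, the interactively painted depth map supplies `distance` and `altitude`, and `blend_fog` performs the color blending step.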

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal / v.31 no.4 / pp.429-437 / 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The proposed technique is based on a deep framebuffer framework for fast relighting computation, which adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several caching cameras, which are created automatically. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep framebuffer pixel and then computes illumination at each pixel from the cached values in the deep framebuffers. All the relighting computations except the deep framebuffer pre-computation are carried out at interactive rates on the GPU.
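The deep-framebuffer idea, cache the shading parameters once, then rerun only the lighting pass, can be sketched as below. This is a minimal Lambertian stand-in for the paper's full surface shaders, with hypothetical buffer names:

```python
import numpy as np

def relight(gbuffer, lights):
    """Relight a cached deep framebuffer: per pixel, evaluate a diffuse term
    from cached normals, albedo, and diffuse coefficients, so changing the
    lights never re-rasterizes the scene. (Simplified Lambert-only sketch;
    "normal", "albedo", "kd" are assumed cache layouts.)"""
    normal = gbuffer["normal"]    # (H, W, 3), unit surface normals
    albedo = gbuffer["albedo"]    # (H, W, 3), cached surface color
    kd = gbuffer["kd"]            # (H, W, 1), diffuse coefficient
    out = np.zeros_like(albedo)
    for light in lights:          # directional lights: unit dir + RGB color
        ndotl = np.clip(np.sum(normal * light["dir"], axis=-1, keepdims=True),
                        0.0, 1.0)
        out += kd * ndotl * albedo * light["color"]
    return np.clip(out, 0.0, 1.0)
```

Only this per-pixel loop reruns when the lighting setup changes, which is what makes interactive rates feasible on the GPU.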

Evaluation of Artificial Intelligence-Based Denoising Methods for Global Illumination

  • Faradounbeh, Soroor Malekmohammadi;Kim, SeongKi
    • Journal of Information Processing Systems / v.17 no.4 / pp.737-753 / 2021
  • As the demand for high-quality rendering for mixed reality, video games, and simulation has increased, global illumination has been actively researched. Monte Carlo path tracing can realize global illumination and produce photorealistic scenes that include critical effects such as color bleeding, caustics, multiple lights, and shadows. If the sampling rate is insufficient, however, the rendered results contain a large amount of noise. The most successful approach to eliminating or reducing Monte Carlo noise uses a feature-based filter, which exploits scene characteristics such as position in world coordinates and the shading normal. In general, these techniques operate on the denoised pixel or sample and are computationally expensive. The main challenge for all of them is to find the appropriate weights for every feature while preserving the details of the scene. In this paper, we compare recent algorithms for removing Monte Carlo noise in terms of their performance and quality, and we describe their advantages and disadvantages. To the best of our knowledge, this study is the first to compare artificial-intelligence-based denoising methods for Monte Carlo rendering.
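The feature-based filtering the abstract refers to can be sketched as a cross-bilateral weighting: a neighbor's contribution falls off with its distance to the center pixel in each auxiliary feature. This is a generic sketch of the family of filters the survey compares, not any one method:

```python
import numpy as np

def feature_filter_weights(center_feat, neighbor_feats, bandwidths):
    """Cross-bilateral weights for Monte Carlo denoising: multiply a Gaussian
    falloff per auxiliary feature (e.g. world position, shading normal).
    Choosing the per-feature bandwidths is exactly the hard part the
    learning-based methods try to automate."""
    n = next(iter(neighbor_feats.values())).shape[0]
    w = np.ones(n)
    for name, sigma in bandwidths.items():
        d2 = np.sum((neighbor_feats[name] - center_feat[name]) ** 2, axis=-1)
        w *= np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()
```

The denoised pixel is then the weighted average of the noisy neighbor radiances under these weights.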

Full color reflective cholesteric liquid crystal using photosensitive chiral dopant (감광성 도판트를 이용한 풀컬러 구현 가능 반사형 콜레스테릭 액정)

  • Park, Seo-Kyu;Cho, Hee-Seok;Kwon, Soon-Bum;Kim, Jeong-Soo;Reznikov, Yu.
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2007.11a / pp.394-395 / 2007
  • In order to make full-color cholesteric displays, color-filter-less R, G, B sub-pixel structured cholesteric LC cells have been studied. To produce the R, G, B colors, a UV-induced pitch-variant chiral dopant was added to the cholesteric LC mixtures. The concentration of the photosensitive chiral dopant was adjusted so that the initial state showed blue, and the color changed from blue to green and red with increasing UV irradiation of the cholesteric cells. To prevent mixing of the R, G, B reflective sub-pixel liquid crystals, separation walls were formed with negative photoresist in the boundary areas between sub-pixels. Through optimization of the material concentrations and UV irradiation conditions, vivid R, G, B colors were achieved.


Color2Gray using Conventional Approaches in Black-and-White Photography (전통적 사진 기법에 기반한 컬러 영상의 흑백 변환)

  • Jang, Hyuk-Su;Choi, Min-Gyu
    • Journal of the Korea Computer Graphics Society / v.14 no.3 / pp.1-9 / 2008
  • This paper presents a novel optimization-based saliency-preserving method for converting color images to grayscale in a manner consistent with the conventional approaches of black-and-white photographers. In black-and-white photography, a colored filter called a contrast filter is commonly mounted on the camera to lighten or darken selected colors. In addition, local exposure controls such as dodging and burning are typically employed in the darkroom to change the exposure of local areas within the print without affecting the overall exposure. Our method seeks a digital version of a conventional contrast filter that preserves visually important image features. Furthermore, conventional burning and dodging techniques are incorporated, together with image similarity weights, to give edge-aware local exposure control over the image space. Our method can be efficiently optimized on the GPU: according to the experiments, the CUDA implementation converts 1-megapixel color images to grayscale at interactive frame rates.
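A digital contrast filter of the kind described above can be sketched as a weighted channel collapse. Here the weights are fixed and illustrative; the paper optimizes them per image to preserve saliency:

```python
import numpy as np

def contrast_filter_gray(rgb, filter_weights):
    """Digital analogue of a photographer's contrast filter: channel weights
    lighten or darken selected colors before collapsing to grayscale. A 'red
    filter' (heavy R weight) darkens blue skies, as in film practice."""
    w = np.asarray(filter_weights, dtype=float)
    w = w / w.sum()                               # normalize the filter
    return np.tensordot(np.asarray(rgb, dtype=float), w, axes=([-1], [0]))
```

For example, converting a bluish sky pixel with a red-filter weighting such as (0.8, 0.15, 0.05) yields a darker gray than the standard luminance weights (0.299, 0.587, 0.114), matching the darkened-sky look of film red filters.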


A Simple and Efficient Antialiasing Method with the RUF buffer (RUF 버퍼를 이용한 간단하고 효율적인 안티알리아싱 기법)

  • 김병욱;박우찬;양성봉;한탁돈
    • Journal of KIISE:Computer Systems and Theory / v.30 no.3_4 / pp.205-212 / 2003
  • In this paper, we propose a simple and efficient hardware-supported antialiasing algorithm and its rendering scheme. The proposed method can substantially reduce the required memory bandwidth as well as memory size compared to conventional supersampling when rendering 3D models, while providing almost the same high-quality scenes as supersampling does. We introduce the RUF (Recently Used Fragment) buffer, which stores part or all of a fragment, or the merged result of two or more fragments, recently used in color calculation. We also propose a color calculation algorithm that references the RUF buffer without noticeably deteriorating image quality. Because of this efficiency, as the number of sampling points increases, so does the memory saving ratio relative to conventional supersampling. In our simulation, the proposed method reduces memory size by 31% and memory bandwidth by 11%, with a moderate pixel color difference of 1.3% compared to supersampling with 8 sparse sampling points.

Effect of Glass Composition on the Optical Properties of Color Conversion Glasses for White LED (유리조성에 따른 백색 LED용 색변환 유리의 광특성)

  • Huh, Cheolmin;Hwang, Jonghee;Lim, Tae-Young;Kim, Jin-Ho;Lee, MiJai;Yoo, Jong-Sung;Park, Tae-Ho;Moon, Jooho
    • Korean Journal of Materials Research / v.22 no.12 / pp.669-674 / 2012
  • Yellow-phosphor-dispersed color conversion glasses are promising phosphor materials for white LED applications because of their good thermal durability, chemical stability, and resistance to ultraviolet light. Six color conversion glasses were prepared from high-Tg and low-Tg glasses. The luminous efficacy, luminance, CIE (Commission Internationale de l'Eclairage) chromaticity, CCT (Correlated Color Temperature), and CRI (Color Rendering Index) of the color conversion glasses were analyzed from the PL spectrum. Color conversion glasses made with high-Tg glass frit and sintered at higher temperature showed better luminous properties than those made with low-Tg glass frit. The characteristics of the color conversion glass depended on the glass composition rather than on the sintering temperature. The XRD peaks of the YAG phosphor disappeared in the color conversion glass whose major components were $B_2O_3$-ZnO-$SiO_2$-CaO, and new crystalline peaks of $BaSi_2O_5$ appeared in the XRD results for the glass whose major components were $Bi_2O_3$-ZnO-$B_2O_3$-MgO. The CIE chromaticity, CCT, and CRI of the low-Tg color conversion glasses were worse than those of the high-Tg glasses; however, these color characteristics of the low-Tg glasses were improved by varying the thickness. Thus, color conversion glasses with both good luminous and good color properties were obtained.

A Study on Construction and its Application of Multichannel Type Spectroradiometer (Multichannel Type 분광방사측정 시스템의 제작 및 응용에 관한 연구)

  • 성연국;백운식
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers / v.10 no.2 / pp.54-62 / 1996
  • A multichannel-type spectroradiometer that can measure the optical characteristics of light sources was constructed. Our system measures light sources over wavelengths ranging from the ultraviolet to the infrared (220 nm~1100 nm) in 16 ms. Optical characteristics such as color coordinates, color rendering index, brightness, and color difference were measured and analyzed.
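The step from a measured spectrum to the reported color coordinates can be sketched as an integration against color matching functions. The toy `cmf` table below is a placeholder; a real instrument uses the tabulated CIE 1931 functions on a fine wavelength grid:

```python
def xyz_chromaticity(wavelengths, power, cmf):
    """Integrate a spectral power distribution against color matching
    functions to get XYZ tristimulus values, then (x, y) chromaticity.
    `cmf` maps wavelength -> (xbar, ybar, zbar)."""
    X = sum(p * cmf[wl][0] for wl, p in zip(wavelengths, power))
    Y = sum(p * cmf[wl][1] for wl, p in zip(wavelengths, power))
    Z = sum(p * cmf[wl][2] for wl, p in zip(wavelengths, power))
    s = X + Y + Z
    return X / s, Y / s
```

Quantities like CCT and the color rendering index are then derived from these chromaticity coordinates and the measured spectrum by further standardized calculations.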
