• Title/Summary/Keyword: 재렌더링 (re-rendering)

Search Results: 36

Advanced Pre-Integrated BRDF for Realistic Transmission Light Color in Skin Rendering based on Unity3D (Unity3D기반 피부 투과광의 사실적 색표현을 위한 개선된 사전정의 BRDF)

  • Kim, Seong-Hoon;Moon, Yoon-Young;Choi, Jin-Woo;Yang, Young-Kyu;Han, Gi-Tae
    • Annual Conference of KIPS / 2014.04a / pp.840-843 / 2014
  • Realistic skin rendering must account not only for the diffuse and specular reflection occurring at the skin surface, but also for light scattered within the skin layers and light transmitted through thin skin layers. Computing these effects physically in real time requires a large amount of computation and time, so they can be approximated with a pre-integrated BRDF approach in which diffuse and specular reflection are pre-computed, stored as textures, and reused. However, the skin transmission color texture map generated by the pre-integrated BRDF has fixed colors, so the color of light passing through the skin does not change even when the light color changes, which looks unnatural. To solve this problem, this paper computes a light attenuation ratio from the distance between the object and the light source, and shows that natural transmitted light can be expressed in skin rendering by modifying the RGB channels of the skin transmission color texture map using the light color and the attenuation ratio.
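The abstract above describes computing a distance-based attenuation ratio and using it, together with the light color, to rescale the RGB channels of the pre-integrated transmission color texture. The exact formulas are not given in the abstract, so the following Python sketch only illustrates one plausible reading; the attenuation model and the function names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: tint a pre-integrated skin transmission LUT by the
# current light color, scaled by a distance-based attenuation ratio.
# The attenuation model and blend formula are assumptions, not the paper's.

def attenuation(distance, k_const=1.0, k_quad=0.1):
    """Simple distance falloff; the ratio actually used in the paper is not specified."""
    return 1.0 / (k_const + k_quad * distance * distance)

def tint_transmission_lut(lut, light_color, distance):
    """Modulate each RGB texel of the transmission LUT by the attenuated light color.

    lut         -- list of (r, g, b) texels in [0, 1], the pre-integrated transmission colors
    light_color -- (r, g, b) of the light source in [0, 1]
    distance    -- distance between the lit object and the light source
    """
    a = attenuation(distance)
    tinted = []
    for r, g, b in lut:
        tinted.append((min(r * light_color[0] * a, 1.0),
                       min(g * light_color[1] * a, 1.0),
                       min(b * light_color[2] * a, 1.0)))
    return tinted

if __name__ == "__main__":
    lut = [(0.9, 0.3, 0.2), (0.8, 0.4, 0.3)]   # reddish transmission colors
    print(tint_transmission_lut(lut, light_color=(0.2, 0.4, 1.0), distance=2.0))
```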

Research on the Productions of Analog Pens within the Smart Media (스마트 미디어에서의 아날로그 펜화기법 제작 연구 -어플리케이션 "스케치 플러스"를 중심으로)

  • Yoon, Dong-Joon;Oh, Seung-Hwan
    • Journal of Digital Convergence / v.14 no.12 / pp.413-421 / 2016
  • This research, centered on the development of the iPhone-exclusive app 'Sketch Plus', addresses the production of pen-drawing techniques in smart media. Our goal is to develop and reproduce pen techniques based on non-photorealistic rendering and, in that process, to describe how design perspectives and algorithms are fused. The research reviews the concept of non-photorealistic rendering, a technique that mimics traditional art forms, and proposes 15 pen techniques and ways to display them by analyzing previous research on smart media. We describe how hatching patterns are used and reused to solve the lag problems that occur when reproducing pen techniques on resource-limited smart devices, and we organize the conversion into pen patterns into four steps: rough sketch, contrast, applying and mimicking patterns, and applying color. We hope this research on reproducing analog pen techniques can serve as an example for production based on non-photorealistic rendering.
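The abstract above lists a four-step conversion (rough sketch, contrast, pattern application, color) built around hatching patterns. As a rough illustration of the pattern-application step only, the Python sketch below maps pixel intensity to a hatching level; the thresholds and the tiling scheme are assumptions, not the app's actual algorithm.

```python
# Hypothetical sketch of the pattern-application step: choose a hatching level
# per pixel from its intensity. The four-level scheme and thresholds are
# illustrative assumptions.

def hatch_level(intensity, thresholds=(0.25, 0.5, 0.75)):
    """Return 0 (densest hatching) .. 3 (no hatching) for an intensity in [0, 1]."""
    for level, t in enumerate(thresholds):
        if intensity < t:
            return level
    return len(thresholds)

def apply_hatching(gray_image, patterns):
    """gray_image: 2D list of intensities; patterns: dict level -> 2D tileable pattern."""
    h, w = len(gray_image), len(gray_image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            level = hatch_level(gray_image[y][x])
            tile = patterns[level]
            out[y][x] = tile[y % len(tile)][x % len(tile[0])]
    return out
```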

Force Shading using Height Map for Virtual Tak-bon Simulation (가상 탁본 시뮬레이션의 Height Map을 이용한 힘 쉐이딩)

  • Park, Ye-Seul;Park, Jin-Ah
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.590-594 / 2008
  • Recently, technologies that provide users with intuitive information through human-computer interaction have been advancing, and applications have been proposed for realistically experiencing art techniques in virtual form using non-photorealistic rendering. Among art techniques, this paper proposes a newly devised force shading method for simulating the rubbing (Tak-bon) technique, performed with a mallet, in a virtual environment. Unlike a point-based haptic cursor, the rubbing mallet contacts the surface over an area; to solve the problems arising from this area contact, the principle is to partially redefine the height map using a Canny edge detection image employed as the height map, and to apply it to the force computation so that force shading for the colliding mallet becomes possible. By delivering tactile information to the user in real time through a haptic device, together with graphic rendering effects, the method is expected to provide a way to experience a variety of art-educational effects.
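The abstract above describes force shading for an area contact (the mallet) driven by a height map derived from a Canny edge image. The exact force model is not given, so the Python sketch below only shows the general idea of averaging penetration over the mallet footprint; the spring constant and the averaging are assumptions.

```python
# Hypothetical sketch: derive a feedback force for an area contact (the rubbing
# mallet) from a height map. The penetration averaging and the spring model
# are illustrative assumptions; the paper's force-shading formula may differ.

def mallet_force(height_map, cx, cy, radius, tool_height, stiffness=200.0):
    """Average the penetration of the mallet footprint into the height map.

    height_map  -- 2D list of surface heights
    (cx, cy)    -- footprint center in map coordinates
    radius      -- footprint radius in texels
    tool_height -- current height of the mallet face
    Returns an upward force magnitude (0 if there is no contact).
    """
    total, count = 0.0, 0
    for y in range(max(0, cy - radius), min(len(height_map), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(height_map[0]), cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius * radius:
                penetration = height_map[y][x] - tool_height
                if penetration > 0:
                    total += penetration
                    count += 1
    return stiffness * (total / count) if count else 0.0
```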

Template-Based Object-Order Volume Rendering with Perspective Projection (원형기반 객체순서의 원근 투영 볼륨 렌더링)

  • Koo, Yun-Mo;Lee, Cheol-Hi;Shin, Yeong-Gil
    • Journal of KIISE: Computer Systems and Theory / v.27 no.7 / pp.619-628 / 2000
  • Perspective views provide a powerful depth cue and thus aid the interpretation of complicated images. The main drawback of current perspective volume rendering is its long execution time. In this paper, we present an efficient perspective volume rendering algorithm based on coherency between rays. Two sets of templates are built for the rays cast from horizontal and vertical scanlines in the intermediate image, which is parallel to one of the volume faces. Each sample along a ray is calculated by interpolating neighboring voxels with the pre-computed weights stored in the templates. We also solve the problem of uneven sampling rates caused by perspective ray divergence by building additional templates for regions far away from the viewpoint. Since our algorithm operates in object order, it avoids redundant access to each voxel and exploits spatial data coherency by using a run-length-encoded volume. Experimental results show that the use of templates and object-order processing with a run-length-encoded volume provide speedups compared to other approaches. Additionally, the image quality of our algorithm improves by solving the uneven sampling rate caused by perspective ray divergence.
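The abstract above states that each ray sample is obtained by interpolating neighboring voxels with weights pre-computed in templates shared by the rays of a scanline. The Python sketch below shows that reuse pattern for a 2D slice with bilinear interpolation; the data layout and step size are assumptions, and the actual method works on 3D volumes with its own intermediate-image setup.

```python
# Hypothetical sketch of template reuse: pre-compute the per-step sample
# positions and interpolation weights once for a ray direction, then evaluate
# every ray of that scanline with the same template. A 2D slice and bilinear
# interpolation are used for brevity.

def build_template(direction, num_steps, step=1.0):
    """Offsets (from the ray origin) and interpolation weights along one ray direction.
    Assumes a direction with non-negative components."""
    dx, dy = direction
    template = []
    for i in range(num_steps):
        x, y = i * step * dx, i * step * dy
        ix, iy = int(x), int(y)
        template.append((ix, iy, x - ix, y - iy))   # integer cell + fractional weights
    return template

def sample_ray(volume, origin, template):
    """Resample one ray by reusing the shared template of offsets and weights.
    The caller must ensure all samples stay inside the slice."""
    ox, oy = origin
    samples = []
    for ix, iy, fx, fy in template:
        x0, y0 = ox + ix, oy + iy
        v00, v10 = volume[y0][x0], volume[y0][x0 + 1]
        v01, v11 = volume[y0 + 1][x0], volume[y0 + 1][x0 + 1]
        samples.append((1 - fx) * (1 - fy) * v00 + fx * (1 - fy) * v10
                       + (1 - fx) * fy * v01 + fx * fy * v11)
    return samples
```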

Memory Efficient Parallel Ray Casting Algorithm for Unstructured Grid Volume Rendering on Multi-core CPUs (비정렬 격자 볼륨 렌더링을 위한 다중코어 CPU기반 메모리 효율적 광선 투사 병렬 알고리즘)

  • Kim, Duksu
    • Journal of KIISE / v.43 no.3 / pp.304-313 / 2016
  • We present a novel memory-efficient parallel ray casting algorithm for unstructured-grid volume rendering on multi-core CPUs. Our method is based on the Bunyk ray casting algorithm. To solve the high memory overhead of the Bunyk algorithm, we allocate a fixed-size local buffer to each thread; the local buffers hold information about recently visited faces, which is either reused by other rays or replaced by other faces' information. To improve the utilization of the local buffers, we propose an image-plane-based ray grouping algorithm that makes ray groups highly coherent. The ray groups are then distributed to computing threads, and each thread processes its groups independently. We also propose a novel hash function that uses face indices as keys to compute the buffer slot in which each face's information is stored. To evaluate the method, we applied it to three unstructured-grid datasets of different sizes and measured the performance. Our method requires just 6% of the memory space needed by the Bunyk algorithm to store face information, while showing comparable performance despite using less memory. In addition, for a large-scale unstructured-grid dataset it achieves up to 22% higher performance than the Bunyk algorithm with less memory. These results show the robustness and efficiency of our method and demonstrate that it is suitable for volume rendering of large-scale unstructured-grid datasets.
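The abstract above gives each thread a fixed-size local buffer of recently visited faces, addressed through a hash of the face index. The Python sketch below shows a direct-mapped cache in that spirit; the modulo hash and the overwrite-on-collision policy are assumptions, not the paper's exact hash function or replacement rule.

```python
# Hypothetical sketch: a fixed-size, direct-mapped per-thread buffer that
# caches per-face data (e.g., plane equations) keyed by face index.
# The modulo hash and overwrite-on-collision policy are assumptions.

class FaceBuffer:
    def __init__(self, size, compute_face_data):
        self.size = size
        self.keys = [None] * size          # face index stored in each slot
        self.values = [None] * size        # cached data for that face
        self.compute = compute_face_data   # fallback when the face is not cached

    def get(self, face_index):
        slot = face_index % self.size      # assumed hash: index modulo buffer size
        if self.keys[slot] != face_index:  # miss: recompute and replace the slot
            self.keys[slot] = face_index
            self.values[slot] = self.compute(face_index)
        return self.values[slot]

# Usage sketch: one buffer per rendering thread, so no synchronization is needed.
buffer = FaceBuffer(size=1024, compute_face_data=lambda i: ("plane data for face", i))
print(buffer.get(5))      # computed and cached
print(buffer.get(5))      # served from the local buffer
print(buffer.get(1029))   # collides with face 5 and overwrites the slot
```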

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan;Yoo, Injae;Lee, Jaechung;Jang, Seyoung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.4 / pp.1050-1057 / 2020
  • Immersive 360-degree media suffers from slow video-recognition speed when processed by conventional methods because it uses a variety of rendering methods, and its file size is much larger than that of existing video due to its higher quality and extra-large volume. In addition, because of the characteristics of immersive 360-degree media, in most cases only one scene is captured with the camera fixed in a specific place, so it is not necessary to extract feature information from every scene. In this paper, we propose a reference frame selection method for immersive 360-degree media and describe how it is applied to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical-form rendering. In the rendering step, the video is divided into 16 frames and captured. In the central region, where most of the object information lies, objects are extracted using per-pixel RGB vectors and deep learning, and a reference frame is selected using the object feature information.
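The abstract above captures 16 frames from the spherical rendering, extracts objects in the central region, and keeps the frame richest in object features as the reference frame. The Python sketch below only illustrates that selection step; the detect_objects callback is a stand-in for the RGB-vector and deep-learning extractor and is purely an assumption.

```python
# Hypothetical sketch of the selection step: among the captured frames, pick
# the one whose central region yields the most object feature information.
# detect_objects stands in for the paper's RGB-vector + deep-learning extractor.

def central_region(frame):
    """Crop the middle half of the frame, where most object information lies."""
    h, w = len(frame), len(frame[0])
    return [row[w // 4: 3 * w // 4] for row in frame[h // 4: 3 * h // 4]]

def select_reference_frame(frames, detect_objects):
    """Return the index of the frame with the largest number of detected object features."""
    best_index, best_score = 0, -1
    for i, frame in enumerate(frames):
        features = detect_objects(central_region(frame))
        if len(features) > best_score:
            best_index, best_score = i, len(features)
    return best_index
```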

Shadow Texture Generation Using Temporal Coherence (시간일관성을 이용한 그림자 텍스처 생성방법)

  • Oh Kyoung-su;Shin Byeong-Seok
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1550-1555 / 2004
  • Shadows increase the visual realism of computer-generated images and are a good hint for spatial relationships between objects. Previous methods produce a shadow texture for an object by rendering all objects between that object and the light source; consequently, the total time for generating shadow textures for all objects is O(N²), where N is the number of objects. We propose a novel shadow texture generation method with constant processing time for each object using a shadow depth buffer. In addition, we present a method to achieve further speed-up using temporal coherence. If transitions between dynamic and static states are infrequent, the depth values of static objects do not vary significantly, so we can reuse the depth values of static objects and render only the dynamic objects.
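The speed-up described above relies on temporal coherence: the depth contribution of static objects is cached across frames and only dynamic objects are re-rendered into the shadow depth buffer. The Python sketch below shows that caching pattern; the render_depth callback and the min-composite are simplifying assumptions.

```python
# Hypothetical sketch: reuse a cached shadow depth buffer for static objects
# and re-render only the dynamic objects each frame. render_depth stands in
# for rasterizing one object's depth from the light's point of view.

def composite(base, layer):
    """Keep the depth closest to the light at every texel (min composite)."""
    return [[min(b, l) for b, l in zip(brow, lrow)] for brow, lrow in zip(base, layer)]

class ShadowDepthCache:
    def __init__(self, width, height, far=1e9):
        self.empty = [[far] * width for _ in range(height)]
        self.static_depth = None                 # cached contribution of static objects

    def build(self, static_objects, dynamic_objects, render_depth):
        if self.static_depth is None:            # rebuild only when statics change
            self.static_depth = self.empty
            for obj in static_objects:
                self.static_depth = composite(self.static_depth, render_depth(obj))
        depth = self.static_depth
        for obj in dynamic_objects:              # dynamic objects are re-rendered every frame
            depth = composite(depth, render_depth(obj))
        return depth

    def invalidate(self):
        """Call when a static object becomes dynamic (or vice versa)."""
        self.static_depth = None
```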

Timing System for 3D Animation Production (3차원 애니메이션을 위한 타이밍 시스템 구현)

  • Song, Wan-Seo;Kyung, Min-Ho;Suk, Hae-Jung
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.836-842 / 2006
  • In 3D animation production, motion timing (for example, timing and spacing, slow-in, slow-out) is one of the most important elements for accurately conveying the meaning and feeling of a performance. Editing this timing is therefore essential in animation work, but doing so in existing 3D animation systems has been technically difficult. First, because timing editing deforms the time axis itself, it requires re-parameterization of the interpolation curves, a function that existing animation systems do not provide. Second, animation directors often take part in timing editing directly, but since they are generally unfamiliar with 3D animation systems, it has been hard for them to produce the desired results themselves. This paper implements a new animation timing system that solves these problems. The system takes rendered image files and an animation scene file as input, lets the user edit the timing, and writes the result back into the animation scene file. Timing editing is offered in two forms: inserting or deleting frames in a manner similar to traditional cel animation production, and adjusting timing by directly manipulating a time-warping graph. The former can be used intuitively by directors or cel animators unfamiliar with production tools, and the latter is provided for finer timing adjustment. The edited timing is recorded in the animation file by re-parameterizing the interpolation curve of each motion variable. The implemented system supports Maya animation files, which are widely used in actual animation production.
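The abstract above records edited timing by re-parameterizing each motion variable's interpolation curve through a time-warping graph. The Python sketch below shows that re-parameterization for piecewise-linear curves; the curve representation is an assumption, since the real system operates on Maya animation curves.

```python
# Hypothetical sketch: apply a monotone time-warping graph to an animation
# curve by evaluating the original curve at warped times. Piecewise-linear
# curves stand in for Maya's interpolation curves.

from bisect import bisect_right

def evaluate(keys, t):
    """Piecewise-linear evaluation of [(time, value), ...] sorted by time."""
    times = [k[0] for k in keys]
    if t <= times[0]:
        return keys[0][1]
    if t >= times[-1]:
        return keys[-1][1]
    i = bisect_right(times, t) - 1
    (t0, v0), (t1, v1) = keys[i], keys[i + 1]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def rewarp_curve(curve_keys, warp_keys, output_times):
    """For each output frame time, look up the warped source time and sample the curve."""
    return [(t, evaluate(curve_keys, evaluate(warp_keys, t))) for t in output_times]

# Usage sketch: slow the first half of a 0..24-frame move (slow-in) without touching values.
curve = [(0.0, 0.0), (24.0, 10.0)]
warp = [(0.0, 0.0), (12.0, 6.0), (24.0, 24.0)]     # output frame -> original frame
print(rewarp_curve(curve, warp, output_times=range(0, 25, 6)))
```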

Virtual pencil and airbrush rendering algorithm using particle patch (입자 패치 기반 가상 연필 및 에어브러시 가시화 알고리즘)

  • Lee, Hye Rin;Oh, Geon;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.101-109 / 2018
  • Recently, improvements in virtual reality and augmented reality technologies have enabled many new applications such as virtual study rooms and virtual architecture rooms. Such virtual worlds require free-hand drawing technology, for example for writing out formulas or drawing blueprints of buildings. By their nature, many viewpoint changes occur as we walk around inside a virtual world; in particular, we often look at objects from near and far distances. Traditional drawing methods that use a fixed-size image as the drawing unit do not produce acceptable results, because they generate blurred and jaggy output as the view distance varies. We propose a novel method that is robust in environments with frequent magnification and minification, such as virtual-reality worlds. We implemented our algorithm on both two-dimensional and three-dimensional devices. Our algorithm does not produce any artifacts, jaggies, or blur regardless of the scaling factor.
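The idea described above is to keep strokes as particle data rather than fixed-size stamp images, so they can be re-rasterized at whatever scale the current viewpoint requires without blur or jaggies. The Python sketch below re-renders a stroke's particles at a given zoom factor; the particle format and the radius scaling are illustrative assumptions, not the paper's patch construction.

```python
# Hypothetical sketch: a stroke stored as particles (position, radius, opacity)
# is re-rasterized at the current zoom level instead of scaling a fixed bitmap,
# so magnification does not introduce blur or jaggies.

def rasterize_stroke(particles, zoom, width, height):
    """Render particles into a coverage buffer at the current zoom factor.

    particles -- list of (x, y, radius, opacity) in stroke space
    zoom      -- current view scale factor
    """
    image = [[0.0] * width for _ in range(height)]
    for x, y, radius, opacity in particles:
        px, py, pr = x * zoom, y * zoom, max(radius * zoom, 0.5)
        for iy in range(max(0, int(py - pr)), min(height, int(py + pr) + 1)):
            for ix in range(max(0, int(px - pr)), min(width, int(px + pr) + 1)):
                if (ix - px) ** 2 + (iy - py) ** 2 <= pr * pr:
                    image[iy][ix] = min(1.0, image[iy][ix] + opacity)
    return image
```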

Application of 3D Digital Documentation to Natural Monument Fossil Site (천연기념물 화석산지의 3차원 디지털 기술 적용)

  • Kong, Dal-Yong;Lim, Jong-Deock;Wohn, Kwang-Yeon;Ahn, Jae-Hong;Kim, Kyung-Soo
    • The Journal of the Korea Contents Association / v.11 no.11 / pp.492-502 / 2011
  • Twenty of the numerous fossil sites in Korea have been designated as Natural Monuments for protection and conservation. Many of these sites, located in coastal areas, have been gradually disfigured by natural weathering, erosion, and human activity. Therefore, conservation of the original form and documentation of the original figures are necessary. In this study, we applied 3D digital documentation to Natural Monument No. 394, the Haenam Uhangri dinosaur, pterosaur, and bird footprint fossil site, to preserve the original form of the dinosaur footprints. We obtained 3D digital data on two dinosaur footprint sites, a high-resolution distribution map, and more accurate digital data of the dinosaur footprints by applying an ambient-occlusion rendering method. The 3D digital data on the dinosaur footprints is valuable for conservation and research, and moreover serves as content that can be applied to various fields, such as making 3D brochures and interactive contents.