• Title/Summary/Keyword: Realistic Rendering (사실적인 렌더링)

A Real-time Soft Shadow Rendering Method under the Area Lights having an Arbitrary Shape (임의의 모양을 가지는 면광원 하의 실시간 부드러운 그림자 생성 방법)

  • Chun, Youngjae;Oh, Kyoungsu
    • Journal of Korea Game Society / v.14 no.2 / pp.77-84 / 2014
  • Soft shadow effects from an area light make virtual scenes look more realistic. However, since computing soft shadows takes a long time, acceleration methods are required to use them in real-time 3D applications. Many previous studies assumed that area lights are white rectangles. We suggest a new method that renders soft shadows under an area light source of arbitrary shape and color. To approximate the visibility test, we use the shadow mapping results near a pixel, and the complexity of the shadow around the pixel determines the precision of our visibility estimation. As a result, our method presents more realistic soft shadows in real time for area lights with more general shapes and colors.
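
A minimal sketch of the kind of approximation this abstract describes: visibility toward a colored area light is estimated from shadow-map samples near the shaded pixel, and a simple depth-variance heuristic stands in for the paper's shadow-complexity measure. The function name, window sizes, and sampling scheme below are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def soft_shadow_color(shadow_map, light_tex, uv, depth, base_taps=4, max_taps=16):
    """Approximate visibility toward a colored area light (illustrative only).

    shadow_map : (H, W) depths seen from the light's center.
    light_tex  : (h, w, 3) RGB texture describing the light's shape/color
                 (black texels = no emission).
    uv         : (u, v) light-space coordinate of the shaded pixel in [0, 1).
    depth      : light-space depth of the shaded pixel.
    """
    H, W = shadow_map.shape
    x, y = int(uv[0] * W), int(uv[1] * H)

    # Heuristic "shadow complexity": depth variance in a small window
    # around the pixel decides how many light samples we spend.
    win = shadow_map[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
    taps = max_taps if win.std() > 0.01 else base_taps

    h, w, _ = light_tex.shape
    rng = np.random.default_rng(0)
    accum = np.zeros(3)
    for _ in range(taps):
        # Pick a random point on the light and a nearby shadow-map texel
        # as a stand-in for the exact visibility test toward that point.
        lu, lv = rng.random(2)
        ox = int((lu - 0.5) * 6)            # small offset in shadow-map texels
        oy = int((lv - 0.5) * 6)
        sx = np.clip(x + ox, 0, W - 1)
        sy = np.clip(y + oy, 0, H - 1)
        visible = depth <= shadow_map[sy, sx] + 1e-3   # depth bias
        accum += visible * light_tex[int(lv * (h - 1)), int(lu * (w - 1))]
    return accum / taps
```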

An Efficient Framework for Making Spatial Augmented Reality Digital Contents (공간 증강 현실 디지털 콘텐츠 제작을 위한 효율적인 프레임워크)

  • Chun, Young-Jae;Oh, Kyoung-Su
    • Journal of Korea Game Society / v.13 no.3 / pp.77-84 / 2013
  • We introduce a new framework for making spatial augmented reality content quickly and at low cost. The framework allows projection-based augmented reality to be built with off-the-shelf webcams and projectors. Content producers can create projection-based augmented reality content easily and quickly by setting up a webcam and a projector and then controlling the user interfaces of our framework. Since most previous solutions are expensive, and it is too difficult for producers to apply augmented reality techniques themselves, the framework helps them concentrate on the content. Once the webcam and projector are set up correctly, our framework projects virtual object content according to the manipulation of a trackable real object. As a result, we can see realistic augmented reality content.

An Improved PCF Technique for The Generation of Shadows (그림자생성을 위한 개선된 PCF 기법)

  • Yu, Young-Jung;Choi, Jin-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.8 / pp.1442-1449 / 2007
  • Shadows are important elements for realistic rendering of a 3D scene; without shadows, we cannot perceive the distances between objects in the scene. Two kinds of methods, image-based and object-based, are largely used for rendering shadows. Object-based methods can generate accurate shadow boundaries, but they cannot be used for real-time shadows because their time complexity depends on the complexity of the 3D scene. Image-based shadow techniques are widely used because of their fast calculation time, but they suffer from aliasing problems. PCF (percentage-closer filtering) is a method to solve the aliasing problem: using PCF, antialiased shadow boundaries can be generated. However, PCF with a large filter size requires more time to compute antialiased shadow boundaries. This paper proposes an improved PCF technique that generates antialiased shadow boundaries similar to those of PCF but in less time.
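
For reference, a minimal sketch of the standard PCF filter this paper improves on, written in plain Python/NumPy rather than shader code; the window radius and depth bias are illustrative values.

```python
import numpy as np

def pcf_shadow(shadow_map, x, y, depth, radius=2, bias=1e-3):
    """Classic percentage-closer filtering (PCF) over a depth shadow map.

    Returns a value in [0, 1]: 0 = fully shadowed, 1 = fully lit.
    shadow_map : (H, W) array of depths stored from the light's view.
    (x, y)     : shadow-map texel of the pixel being shaded.
    depth      : the pixel's depth in light space.
    """
    H, W = shadow_map.shape
    lit = 0
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx = min(max(x + dx, 0), W - 1)
            sy = min(max(y + dy, 0), H - 1)
            # Each tap is a binary depth comparison; averaging the
            # comparisons (not the depths) softens the shadow edge.
            lit += depth <= shadow_map[sy, sx] + bias
            count += 1
    return lit / count
```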

Photoscan method for Achieving the Photorealistic Facial Modeling and Rendering (Photoscan 방식의 사실적인 페이셜 모델링 및 렌더링 제작 방법)

  • Zhang, Qi;Fu, Linwei;Jiang, Haitao;Ji, Yun;Qu, Lin;Yun, Taesoo
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.51-52 / 2018
  • Creating a realistic digital character is one of the most difficult challenges in the 3D field. Because facial features are especially hard to model, facial capture technology is becoming increasingly common. This paper presents a method for obtaining a high-quality digital character model through facial capture based on photo scanning. The method not only saves time but is also highly efficient, so this workflow is very useful for obtaining high-quality digital character models.

An Enhancement Technique for Separation of Direct Light and Global Light Using High Frequency Illumination pattern (고주파 조명패턴을 사용한 직접광과 간접광의 분리성능 향상 기법)

  • Jo, Mi-Ri-Na;Park, Dong-Gyu
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1262-1272 / 2009
  • In computer graphics, there are many studies of illumination and radiance for realistic description in 3D modeling and rendering. When we see a scene, it is lit by a light source, and the radiance at each point in the scene is determined by that source. The radiance has a direct component and a global component: the direct light arrives straight from the light source, while the global light arrives indirectly through interreflections among complicated geometric elements. In this paper, we study a method for increasing the accuracy of separating the direct and global light components of a scene by using a high-frequency illumination pattern. In our experiments, we applied Nayar's separation method and found the best configurations for the separation. We improved the separation accuracy of direct and global light by measuring the value of the unilluminated area, which depends on the characteristics of the object. Furthermore, we enhanced the otherwise invisible parts of the global light by applying an image filtering technique.
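
A minimal sketch of high-frequency-illumination separation in the spirit of Nayar's method referenced above: per-pixel max and min over a stack of images captured under shifted patterns yield the direct and global components. The array layout and activation-fraction handling are assumptions; the paper's accuracy enhancements (unilluminated-area measurement, filtering) are not reproduced here.

```python
import numpy as np

def separate_direct_global(images, activation_fraction=0.5):
    """Per-pixel separation under shifted high-frequency illumination
    (illustrative sketch, not the paper's refined procedure).

    images : (N, H, W) stack captured under N shifted high-frequency
             patterns in which `activation_fraction` of source pixels are on.
    Returns (direct, global_) images.
    """
    l_max = images.max(axis=0)        # pixel directly lit by the pattern at least once
    l_min = images.min(axis=0)        # pixel never directly lit
    b = activation_fraction
    # With a fraction b of the source on for every pattern:
    #   l_max ~ direct + b * global,   l_min ~ b * global
    global_ = l_min / b
    direct = l_max - l_min
    return direct, global_
```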

Shading Algorithm Evaluation based on User Perception (사용자 인지 실험 기반 쉐이딩 알고리즘 평가)

  • Byun, Hae-Won;Park, Yun-Young
    • The Journal of the Korea Contents Association / v.11 no.6 / pp.106-115 / 2011
  • In this paper, we evaluate the effectiveness of previous shading algorithms in depicting the shape of 3D objects. We perform a study in which people are shown an image of one of ten 3D objects shaded with one of eight styles and asked to orient a gauge to coincide with the surface normal at many positions on the object's surface. The normal estimates are compared with each other and with ground-truth data provided by a registered 3D surface model to analyze accuracy and precision. Our experiments suggest that people interpret certain shapes differently depending on the shading of the 3D object. This paper offers substantial evidence that current computer graphics shading algorithms can effectively depict the shape of 3D objects when the algorithms have many, uniformly distributed tone steps. This type of analysis can guide the future development of new CG shading algorithms for the purpose of shape perception.
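
A small sketch of the kind of accuracy/precision measurement such a gauge study implies: the angular error between user-set normals and the registered model's ground-truth normals. The function and its summary statistics are hypothetical, not the paper's exact protocol.

```python
import numpy as np

def normal_angular_errors(estimated, ground_truth):
    """Angular error (degrees) between gauge-set normals and ground truth.

    estimated, ground_truth : (N, 3) arrays of surface normals.
    Returns (mean error, error spread); a hypothetical summary where
    accuracy ~ the mean and precision ~ the spread.
    """
    est = estimated / np.linalg.norm(estimated, axis=1, keepdims=True)
    gt = ground_truth / np.linalg.norm(ground_truth, axis=1, keepdims=True)
    cos = np.clip(np.sum(est * gt, axis=1), -1.0, 1.0)
    errors = np.degrees(np.arccos(cos))
    return errors.mean(), errors.std()
```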

Motion of Stone Skipping Simulation by Physically-based Analysis (물리기반 해석을 통한 물수제비 운동 시뮬레이션)

  • Do, Joo-Young;Ra, Eun-Chul;Kim, Eun-Ju;Ryu, Kwan-Woo
    • Journal of KIISE: Computer Systems and Theory / v.33 no.3 / pp.147-156 / 2006
  • Physically-based simulation models the real world by using physical laws such as Newton's second law of motion, whereas other modeling approaches use only geometric properties. In this paper, we present a real-time simulation of stone skipping based on physically-based modeling. We describe the interaction of a stone with the surface of the water, focusing on calculating the path of the stone and the natural phenomena of the water. The path is determined by the velocity of the stone and the drag force from the water, and the motion is recalculated until the stone sinks below the water surface. Our simulation provides natural stone-skipping motion in real time, and the motion is displayed interactively on PC platforms. The techniques presented can easily be extended to simulate other interactive dynamic systems.
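
A toy sketch of the physically-based idea described above: the stone's path is integrated with Newton's second law, and a drag-plus-lift response is applied while the stone touches the water. The coefficients and the 2D simplification are assumptions for illustration, not the paper's model.

```python
import numpy as np

def simulate_stone(pos, vel, mass=0.1, dt=0.005, c_drag=0.4, c_lift=1.2, g=9.81):
    """Integrate a stone's 2D path (x = horizontal, y = height above water)."""
    path = [tuple(pos)]
    pos, vel = np.array(pos, float), np.array(vel, float)
    for _ in range(20000):
        force = np.array([0.0, -mass * g])                  # gravity
        if pos[1] <= 0.0 and vel[1] < 0.0:                  # touching the water
            speed2 = vel @ vel
            force[0] -= c_drag * speed2 * np.sign(vel[0])   # water drag slows it
            force[1] += c_lift * speed2                     # lift kicks it back up
        vel += force / mass * dt                            # Newton's second law
        pos += vel * dt
        path.append(tuple(pos))
        if pos[1] < -0.05:                                  # the stone has sunk
            break
    return path

# Example: launch at 8 m/s horizontally, slightly downward, from 0.5 m height.
trajectory = simulate_stone(pos=(0.0, 0.5), vel=(8.0, -1.0))
```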

5D Light Field Synthesis from a Monocular Video (단안 비디오로부터의 5차원 라이트필드 비디오 합성)

  • Bae, Kyuho;Ivan, Andre;Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.5 / pp.755-764 / 2019
  • Currently available commercial light field cameras make it difficult to acquire 5D light field video, since they either capture only still images or are prohibitively expensive. To solve these problems, we propose a deep-learning-based method for synthesizing light field video from monocular video. To obtain light field training data, we use UnrealCV to acquire synthetic light field data through realistic rendering of a 3D graphics scene. The proposed deep learning framework synthesizes light field video with 9×9 sub-aperture images (SAIs) from the input monocular video. The network consists of one sub-network that predicts the appearance flow from the input image converted to a luminance image, and another that predicts the optical flow between adjacent light field video frames obtained from the appearance flow.
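
A minimal sketch of the warping step implied by the appearance-flow description above: each sub-aperture image is obtained by resampling the monocular frame with a per-pixel flow field. The flow is assumed to be given (in the paper it is predicted by a network), and nearest-neighbor sampling is used for brevity.

```python
import numpy as np

def warp_to_sai(image, flow):
    """Warp a monocular frame into one sub-aperture image (SAI) using a
    per-pixel appearance-flow field (nearest-neighbor sampling for brevity).

    image : (H, W, 3) input frame.
    flow  : (H, W, 2) predicted offsets (dx, dy) telling each output pixel
            where to sample the input.
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return image[src_y, src_x]

# A 9x9 light field is then 81 such warps, one flow field per SAI:
# light_field = [warp_to_sai(frame, flows[u, v]) for u in range(9) for v in range(9)]
```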

Interactive 3D Visualization of Ceilometer Data (운고계 관측자료의 대화형 3차원 시각화)

  • Lee, Junhyeok;Ha, Wan Soo;Kim, Yong-Hyuk;Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society / v.24 no.2 / pp.21-28 / 2018
  • We present interactive methods for visualizing the cloud height data and the backscatter data collected from ceilometers in a three-dimensional virtual space. Because ceilometer data is high-dimensional, large-scale data associated with both spatial and temporal information, it is impractical to convey every aspect of the data with static, two-dimensional images. Based on three-dimensional rendering technology, our visualization methods allow the user to observe both the global variations and the local features of the three-dimensional representations of ceilometer data from various angles by interactively manipulating the timing and the view as desired. The cloud height data, coupled with terrain data, is visualized as a realistic cloud animation in which many clouds form and dissipate over the terrain. The backscatter data is visualized as a three-dimensional terrain that effectively represents how the amount of backscatter changes with time and altitude. Our system facilitates multivariate analysis of ceilometer data by enabling the user to select the date to be examined, the level of detail of the terrain, and additional data such as the planetary boundary layer height. We demonstrate the usefulness of our methods through various experiments with real ceilometer data collected from 93 sites scattered over the country.
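
A small sketch of the backscatter-as-terrain idea described above: a (time × altitude) backscatter matrix is turned into a height-field mesh that any 3D renderer can display. The names and scaling are illustrative, not the paper's actual pipeline.

```python
import numpy as np

def backscatter_heightfield(backscatter, height_scale=1.0):
    """Build a simple height-field mesh from a (time x altitude) backscatter
    matrix: x = time index, y = altitude index, z = scaled backscatter.

    Returns (vertices, quad indices) suitable for any 3D renderer.
    """
    T, A = backscatter.shape
    ts, als = np.mgrid[0:T, 0:A]
    vertices = np.stack(
        [ts.ravel(), als.ravel(), height_scale * backscatter.ravel()], axis=1
    ).astype(float)
    quads = []
    for t in range(T - 1):
        for a in range(A - 1):
            i = t * A + a
            quads.append((i, i + 1, i + A + 1, i + A))
    return vertices, np.array(quads)
```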

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
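
A minimal sketch of the representation-selection rule this abstract describes: a node carrying several representations picks one per frame from the viewing distance, a depth-perception threshold, and whether the user is interacting. The node layout and threshold values are assumptions; the paper implements this inside Sense8's WorldToolKit scene graph.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MixedNode:
    model_3d: Optional[str] = None       # handle to a 3D mesh, if any
    billboard: Optional[str] = None      # handle to a billboard texture
    env_map: Optional[str] = None        # handle to an environment map

def select_representation(node: MixedNode, distance: float,
                          interacting: bool,
                          depth_perception_dist: float = 10.0,
                          far_dist: float = 100.0) -> str:
    """Pick which representation to traverse/render for this frame."""
    # During interaction a 3D model is always preferred, if it exists.
    if interacting and node.model_3d:
        return "model_3d"
    # Switch to the 3D model once the user is close enough to perceive
    # the object's internal depth.
    if distance < depth_perception_dist and node.model_3d:
        return "model_3d"
    if distance < far_dist and node.billboard:
        return "billboard"
    return "env_map" if node.env_map else "billboard"

# Example: a far, non-interacting object falls back to the environment map.
node = MixedNode(model_3d="teapot.obj", billboard="teapot.png", env_map="room.hdr")
print(select_representation(node, distance=250.0, interacting=False))
```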
