• Title/Summary/Keyword: Realistic Rendering

A Study on Real-time Graphic Workflow For Achieving The Photorealistic Virtual Influencer

  • Haitao Jiang
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.130-139
    • /
    • 2023
  • Computer-generated virtual influencers are increasingly popular, especially on social media. Well-known virtual influencer characters such as Lil Miquela and Imma were created with CGI graphics workflows. That process is typically linear: iteration is challenging and costly, development efforts are frequently siloed, and the result offers no real-time interactive experience. A previous study proposed a real-time graphic workflow for the Digital Actor Hologram project, but its output quality fell short of the results obtained with a CGI workflow. This paper therefore proposes a real-time engine graphic workflow for virtual influencers that supports real-time interactive functions while achieving realistic graphic quality. The workflow comprises four processes: Facial Modeling, Facial Texture, Material Shader, and Look-Development. A performance analysis against the real-time graphical workflow of the Digital Actor Hologram demonstrates the usefulness of this result, and the workflow should make the production of virtual influencers more efficient.

The Framework of Realistic Fabric Rendering Based on Measurement (실측 기반의 사실적 옷감 렌더링 프레임워크)

  • Nam, Hyeongil;Sim, Kyudong;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.192-195
    • /
    • 2020
  • Rendering that reflects the material of the actual fabric is one good way to preview the finished garment during the clothing design stage. This paper proposes a measurement-to-rendering framework that combines an open-source fabric rendering method with a device that measures the material of real fabric. When measuring a fabric's material and rendering it, it is important to parameterize the characteristics common to both stages so that the measurements carry over into the rendering. For rendering we use visRTX, NVIDIA's open-source ray-tracing renderer, which delivers good rendering results with reasonable computing resources. With the fabric measurement device we capture the parameters used in rendering, a high-resolution diffuse map and a normal map, and feed them into the renderer. The proposed framework for measuring and rendering fabric materials provides photorealistic rendering results that can be checked while garments are being designed, and is expected to be of great help to the fashion design industry.
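
The measured maps plug into a physically based shading step. As a minimal sketch (assuming RGB textures readable with Pillow and a plain Lambertian model, not the visRTX ray tracer the paper actually uses), a diffuse map and a normal map could be combined per pixel like this:

```python
# Minimal sketch: combine a measured diffuse map and normal map in a
# plain Lambertian shading pass (illustration only; the paper feeds the
# maps to NVIDIA's visRTX ray tracer rather than a CPU loop like this).
import numpy as np
from PIL import Image

def shade_fabric(diffuse_path, normal_path, light_dir=(0.3, 0.3, 0.9)):
    albedo = np.asarray(Image.open(diffuse_path), dtype=np.float32) / 255.0
    normals = np.asarray(Image.open(normal_path), dtype=np.float32) / 255.0
    normals = normals[..., :3] * 2.0 - 1.0              # unpack [0,1] -> [-1,1]
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    light = np.asarray(light_dir, dtype=np.float32)
    light /= np.linalg.norm(light)

    n_dot_l = np.clip(normals @ light, 0.0, 1.0)        # per-pixel cosine term
    shaded = albedo[..., :3] * n_dot_l[..., None]       # Lambertian response
    return (shaded * 255).astype(np.uint8)
```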

Trends and Prospects in Super-realistic Metaverse Visualization Technologies (초실감 메타버스 시각화 기술 동향과 전망)

  • W.S. Youm;C.W. Byun;C.M. Kang;K.J. Kim;Y.D. Kim;D.H. Ahn
    • Electronics and Telecommunications Trends
    • /
    • v.39 no.2
    • /
    • pp.24-32
    • /
    • 2024
  • Wearable metaverse devices have sparked enthusiasm as innovative virtual computing user interfaces by addressing a major source of user discomfort, namely motion-to-photon latency, the delay between user motion and the corresponding screen update. To enhance the realism and immersion of experiences using metaverse devices, the vergence-accommodation conflict in stereoscopic image representation must also be resolved. Ongoing research addresses this challenge by adopting varifocal, multifocal, and light field display technologies for stereoscopic imaging. We explore current research trends with emphasis on multifocal stereoscopic imaging. Successful metaverse visualization services require the integration of stereoscopic image rendering modules and content encoding/decoding technologies tailored to these services. Additionally, real-time video processing is essential so that these modules can process such content correctly and on time and implement metaverse visualization services.

A Study of 3D Sound Modeling based on Geometric Acoustics Techniques for Virtual Reality (가상현실 환경에서 기하학적 음향 기술 기반의 3차원 사운드 모델링 기술에 관한 연구)

  • Kim, Cheong Ghil
    • Journal of Satellite, Information and Communications
    • /
    • v.11 no.4
    • /
    • pp.102-106
    • /
    • 2016
  • With the popularity of smartphones and the help of high-speed wireless communication technology, high-quality multimedia content has become common on mobile devices. In particular, the release of the Oculus Rift opened a new era of virtual reality technology in the consumer market. At the same time, 3D audio technology, currently used to make computer games more realistic, will soon be applied to the next generation of mobile phones and is expected to offer an even more expansive experience than its visual counterpart. This paper surveys concepts, algorithms, and systems for modeling 3D sound in virtual environment applications. We first introduce an important design principle for audio rendering based on physics-based geometric algorithms and multichannel technologies, and then describe an audio rendering pipeline for a scene-graph-based virtual reality system and a hardware architecture for modeling sound propagation.
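
As an illustration of the physics-based geometric algorithms the survey refers to, the sketch below implements the classic first-order image-source method for a rectangular room; the room size, absorption value, and function names are assumptions for the example and are not taken from the paper.

```python
# Minimal sketch of a geometric-acoustics building block: the image-source
# method for a rectangular room. Each wall reflection is replaced by a
# mirrored "image" of the source; the listener then receives delayed,
# attenuated copies of the dry signal.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def image_sources(src, room_size):
    """First-order image sources of `src` mirrored across the six walls."""
    src = np.asarray(src, dtype=float)
    images = []
    for axis in range(3):
        lo = src.copy(); lo[axis] = -src[axis]                      # wall at 0
        hi = src.copy(); hi[axis] = 2 * room_size[axis] - src[axis] # far wall
        images += [lo, hi]
    return images

def early_reflections(src, listener, room_size, absorption=0.3):
    """Return (delay_seconds, gain) pairs for the direct path and
    first-order reflections."""
    listener = np.asarray(listener, dtype=float)
    paths = [np.asarray(src, dtype=float)] + image_sources(src, room_size)
    taps = []
    for i, p in enumerate(paths):
        dist = np.linalg.norm(p - listener)
        gain = 1.0 / max(dist, 1e-3)          # spherical spreading loss
        if i > 0:
            gain *= (1.0 - absorption)        # one wall bounce
        taps.append((dist / SPEED_OF_SOUND, gain))
    return taps
```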

Generation of an eye-contacted view using color and depth cameras (컬러와 깊이 카메라를 이용한 시점 일치 영상 생성 기법)

  • Hyun, Jee-Ho;Han, Jae-Young;Won, Jong-Pil;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.8
    • /
    • pp.1642-1652
    • /
    • 2012
  • Generally, the camera in a tele-presence system is not located at the center of the display, which causes incorrect eye contact between speakers and reduces the sense of realism during conversation. To solve this problem, we propose an intermediate view reconstruction algorithm that uses both a color camera and a depth camera and applies depth-image-based rendering (DIBR). The proposed algorithm includes an efficient hole-filling method based on the arithmetic mean of neighboring pixels and an efficient boundary-noise removal method that expands the edge region of the depth image. Experiments show that the generated eye-contacted image has good quality.
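
The hole-filling step described above can be sketched as follows, assuming the DIBR warp has already produced a color image and a boolean hole mask; the window size and function names are illustrative rather than the paper's exact formulation.

```python
# Minimal sketch of hole filling after DIBR warping: pixels that received
# no source pixel are filled with the arithmetic mean of their valid
# neighbours (simplified single-pass version).
import numpy as np

def fill_holes_mean(warped, hole_mask, window=3):
    """warped: HxWx3 float image; hole_mask: HxW bool, True where the warp
    left the pixel empty."""
    out = warped.copy()
    r = window // 2
    h, w = hole_mask.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        patch = warped[y0:y1, x0:x1]
        valid = ~hole_mask[y0:y1, x0:x1]
        if valid.any():
            out[y, x] = patch[valid].mean(axis=0)  # mean of valid neighbours
    return out
```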

Multiple TIP Images Blending for Wide Virtual Environment (넓은 가상환경 구축을 위한 다수의 TIP (Tour into the Picture) 영상 합성)

  • Roh, Chang-Hyun;Lee, Wan-Bok;Ryu, Dae-Hyun;Kang, Jung-Jin
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.42 no.1
    • /
    • pp.61-68
    • /
    • 2005
  • Image-based rendering is an approach to generating realistic images in real time without modeling explicit 3D geometry. Owing to its simplicity, TIP (Tour Into the Picture) is a preferred way to construct a 3D background scene. Because existing TIP methods lack geometric information, an accurate scene cannot be expected when the viewpoint moves far from the origin of the TIP. In this paper, we propose a method for constructing a virtual environment covering a wide area by blending multiple TIP images. First, we construct multiple TIP models of the virtual environment; then we interpolate foreground and background objects respectively to generate a smooth navigation image. The proposed method can be applied to various industrial applications such as computer games and 3D car navigation.
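
A minimal sketch of the blending idea, assuming two already-rendered TIP views and an inverse-distance weighting by viewpoint (the paper interpolates foreground and background objects separately, which is not shown here):

```python
# Minimal sketch: blend two TIP renderings by viewpoint. The closer the
# camera is to a TIP's reference origin, the more that rendering
# contributes to the navigation image.
import numpy as np

def blend_tip_views(img_a, img_b, cam_pos, origin_a, origin_b):
    d_a = np.linalg.norm(np.subtract(cam_pos, origin_a))
    d_b = np.linalg.norm(np.subtract(cam_pos, origin_b))
    w_a = d_b / (d_a + d_b + 1e-6)      # inverse-distance weight for view A
    return w_a * img_a + (1.0 - w_a) * img_b
```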

Haptic Rendering Technology for Touchable Video (만질 수 있는 비디오를 위한 햅틱 렌더링 기술)

  • Lee, Hwan-Mun;Kim, Ki-Kwon;Sung, Mee-Young
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.691-701
    • /
    • 2010
  • We propose a haptic rendering technology for touchable video. Our touchable video technique lets users feel the sense of touch with a haptic device while probing 2D objects directly in video scenes or manipulating 3D objects brought out from those scenes. In our technique, a server sends video and haptic data as well as information about the 3D model objects; clients receive the video and haptic data from the server and render the 3D models. A video scene is divided into a grid of small cells, and each cell carries tactile information corresponding to a specific combination of four attributes: stiffness, damping, static friction, and dynamic friction. Users feel the sense of touch when they touch cells of a scene directly with a haptic device, and they can also examine objects by touching or manipulating the corresponding 3D objects after bringing them out of the screen. The proposed touchable video technique lets viewers fully experience combined haptic-audio-visual effects directly on video scenes such as movies or home-shopping content.
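
The per-cell tactile lookup can be sketched as a small data structure; the field names, cell size, and the spring-damper force model below are assumptions for illustration rather than the paper's exact formulation.

```python
# Minimal sketch of a tactile grid for touchable video: the frame is
# divided into cells, each storing stiffness, damping and friction, and
# contact force is a simple spring-damper response.
from dataclasses import dataclass

@dataclass
class TactileCell:
    stiffness: float         # N/m
    damping: float           # N*s/m
    static_friction: float
    dynamic_friction: float

class TactileGrid:
    def __init__(self, cells, cell_size):
        self.cells = cells           # 2D list of TactileCell
        self.cell_size = cell_size   # pixels per cell

    def cell_at(self, x, y):
        return self.cells[y // self.cell_size][x // self.cell_size]

    def normal_force(self, x, y, penetration, velocity):
        """Spring-damper contact force at screen position (x, y)."""
        c = self.cell_at(x, y)
        return max(0.0, c.stiffness * penetration - c.damping * velocity)
```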

The Development of Authoring Tool for 3D Virtual Space Based on a Virtual Space Map (가상공간지도 기반의 3차원 가상공간 저작도구의 개발)

  • Jung Il-Hong;Kim Eun-Ji
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.2 s.40
    • /
    • pp.177-186
    • /
    • 2006
  • This paper presents a highly efficient authoring tool for constructing a realistic 3D virtual space with image-based rendering techniques based on a virtual space map. Unlike conventional techniques such as TIP, which construct a small 3D virtual space from a single image, the authoring tool developed here produces a wide 3D virtual space from multiple images. The tool constructs a small 3D virtual space for each input image and interconnects these spaces into a wide 3D virtual space using a virtual space map. The map consists of three elements, rooms, link points, and passageways, together with three directions, and it holds information such as the connection structure and navigation data. The tool also provides a user interface that lets users construct the wide 3D virtual space easily.
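
A virtual space map of this kind can be sketched as a small graph structure; the class and field names below are assumptions for illustration, since the listing does not describe the tool's actual data format.

```python
# Minimal sketch of a virtual space map as a graph: rooms are nodes and
# passageways are edges joining link points in two rooms.
from dataclasses import dataclass, field

@dataclass
class LinkPoint:
    room: str
    position: tuple          # (x, y) inside the room's image space
    direction: str           # e.g. "north", "east", ...

@dataclass
class VirtualSpaceMap:
    rooms: dict = field(default_factory=dict)        # name -> source image path
    passageways: list = field(default_factory=list)  # (LinkPoint, LinkPoint)

    def add_room(self, name, image_path):
        self.rooms[name] = image_path

    def connect(self, a: LinkPoint, b: LinkPoint):
        self.passageways.append((a, b))

    def neighbors(self, room):
        """Rooms reachable from `room` through a passageway."""
        return [q.room for p, q in self.passageways if p.room == room] + \
               [p.room for p, q in self.passageways if q.room == room]
```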

Improved Progressive Photon Mapping Using Photon Probing (포톤 탐사법을 이용한 개선된 점진적 포톤 매핑)

  • Lee, Sang-Gil;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.3
    • /
    • pp.41-48
    • /
    • 2010
  • Photon mapping is a traditional global illumination method that emits many photons from the light source for photorealistic rendering, but tracing millions of photons requires a lot of resources. Progressive photon mapping addresses this problem. Typical progressive photon mapping first performs ray tracing to find hit points on the diffuse surfaces of objects; the light source then repeatedly emits a small number of photons in a photon tracing pass, and the power of the photons falling inside a fixed-radius sphere centered at each hit point is accumulated. This requires fewer resources than conventional photon mapping, but because each photon travels in a random direction it takes a long time to gather enough photons and render a high-quality image. To improve the method, we propose photon probing, which computes the variance of the photons in each sphere and adjusts the sphere's radius accordingly. In addition, we apply a cone filter in the radiance estimation step to reduce aliasing at edges in the resulting image.
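
For context, the sketch below shows the conventional progressive photon mapping radius/flux update (Hachisuka et al.) together with a cone-filter weighting; the paper's variance-based photon probing replaces this fixed schedule for shrinking the radius and is not reproduced here.

```python
# Minimal sketch of a standard progressive photon mapping pass update:
# shrink the gather radius, rescale accumulated flux, and weight new
# photons with a cone filter so edge photons contribute less.
import math

ALPHA = 0.7  # fraction of newly gathered photons kept each pass

def ppm_update(radius, n_accum, flux, new_photons):
    """new_photons: list of (distance_to_hit_point, power) inside `radius`."""
    m = len(new_photons)
    if m == 0:
        return radius, n_accum, flux
    n_new = n_accum + ALPHA * m
    ratio = n_new / (n_accum + m)
    radius_new = radius * math.sqrt(ratio)      # progressive radius reduction

    k = 1.1                                     # cone filter constant
    filtered = sum(p * max(0.0, 1.0 - d / (k * radius)) for d, p in new_photons)
    flux_new = (flux + filtered) * ratio        # keep flux estimate consistent
    return radius_new, n_new, flux_new
```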

Real-Time Simulation of Single and Multiple Scattering of Light (빛의 단일 산란과 다중 산란의 실시간 시뮬레이션 기법)

  • Ki, Hyun-Woo;Lyu, Ji-Hye;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.7 no.2
    • /
    • pp.21-32
    • /
    • 2007
  • Simulating the scattering of light within media is important for realistic image synthesis, but it requires costly computation. This paper introduces a practical image-space approximation technique for interactive subsurface scattering. We use a general two-pass approach that writes transmitted irradiance samples into shadow maps and then computes illumination from those shadow maps. Single scattering is estimated efficiently with a method similar to common shadow mapping, using adaptive deterministic sampling, and multiple scattering is evaluated with a hierarchical technique based on diffusion theory. Rendering is further accelerated by tabulating complex functions and using levels of detail. We demonstrate that our technique produces high-quality images of animated scenes with blurred shadows at hundreds of frames per second on graphics hardware, and that it can easily be integrated into existing interactive systems.
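
The transmitted-irradiance idea can be sketched as a simple exponential attenuation over the thickness the light travels inside the medium, with the entry depth read from a light-space shadow map; the extinction coefficient and function below are illustrative assumptions, not the paper's full single-scattering estimator.

```python
# Minimal sketch: a light-space depth (shadow) map gives the point where
# light enters the object; attenuating over the traversed thickness
# approximates the light transmitted to an interior shading point.
import numpy as np

def transmitted_radiance(light_intensity, depth_entry, depth_exit, sigma_t):
    """depth_entry: light-space depth where the ray enters the object
       (read from the shadow map)
       depth_exit:  light-space depth of the shaded point
       sigma_t:     extinction coefficient of the medium"""
    thickness = np.maximum(depth_exit - depth_entry, 0.0)
    return light_intensity * np.exp(-sigma_t * thickness)
```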
