• Title/Summary/Keyword: Dynamic Graphics (동적 그래픽스)


Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.22 no.3 / pp.89-99 / 2016
  • One of the fundamental elements in developing mixed reality applications is effectively analyzing environmental lighting information and applying it to image synthesis. In particular, interactive applications must process dynamically varying light sources in real time and reflect them properly in the rendering results. Previous related works are often inappropriate for this because they are usually designed to synthesize photorealistic images, generating too many, often exponentially increasing, light sources or incurring too heavy a computational cost. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources approximately to the size that a user specifies. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
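The idea of estimating a user-capped number of primary lights from a fisheye frame can be sketched with greedy peak picking on a luminance image. This is an illustrative stand-in, not the paper's actual estimator; the cap `k` mirrors the user-specified light count.

```python
import numpy as np

def estimate_lights(env, k=4, min_dist=8):
    """Greedy peak picking on a luminance image: repeatedly take the
    brightest remaining pixel as a light source and suppress its
    neighborhood, so at most k lights are reported (the greedy scheme
    itself is a hypothetical stand-in for the paper's method)."""
    lum = env.astype(float).copy()
    h, w = lum.shape
    lights = []
    for _ in range(k):
        idx = np.argmax(lum)
        y, x = divmod(int(idx), w)
        if lum[y, x] <= 0:
            break  # no bright region left
        lights.append((y, x, lum[y, x]))
        y0, y1 = max(0, y - min_dist), min(h, y + min_dist + 1)
        x0, x1 = max(0, x - min_dist), min(w, x + min_dist + 1)
        lum[y0:y1, x0:x1] = 0  # non-maximum suppression around the peak
    return lights
```

Each returned `(row, col, intensity)` triple could then drive one Phong light or one area-light sample.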

Controlling Particle Motion and Attribute Change by Fuzzy Control (퍼지제어에 의한 파티클 움직임 및 속성변화 제어)

  • Kang, Hwa-Seok;Choi, Seung-Hak;Eo, Kil-Su;Lee, Hong-Youl
    • Journal of the Korea Computer Graphics Society / v.2 no.1 / pp.7-14 / 1996
  • A particle system is defined as a collection of primitive particles that together represent irregular and ever-changing objects such as smoke, clouds, waterfalls, and explosions. A particle system can be a powerful tool for modeling a deformable object's motion and change of form since it has dynamic properties over time. As an object becomes more complicated and shows more chaotic behavior, however, many more parameters are needed to describe its characteristics completely. Consequently, the conventional particle system makes it difficult to manage all of the parameters properly, since one parameter can affect the others. Moreover, the motion equations representing particles' behavior are usually approximated to gain speed-ups, and the inevitable errors in evaluating these equations can cause unexpected outcomes. In this paper, we present a new approach that applies fuzzy control to manage particles' motion and attribute changes over time. We also give an implementation of a fuzzy particle system to show the feasibility of the proposed method. Applications of the system to explosions, nebulae, volcanoes, and grass are presented.
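A minimal sketch of fuzzy control over a particle attribute: triangular membership functions fuzzify the particle's normalized age, and a weighted average defuzzifies the rule outputs into a size value. The rule base and constants are illustrative, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_size(age):
    """Rule base (hypothetical): IF age is young THEN size is large;
    IF age is middle THEN size is medium; IF age is old THEN size is
    small. Defuzzified by a weighted average of output levels."""
    young = tri(age, -0.5, 0.0, 0.5)
    middle = tri(age, 0.0, 0.5, 1.0)
    old = tri(age, 0.5, 1.0, 1.5)
    w = young + middle + old
    return (young * 1.0 + middle * 0.6 + old * 0.1) / w if w else 0.0
```

Because neighboring rules overlap, the attribute changes smoothly over a particle's lifetime instead of jumping between hand-tuned parameter values.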


Sensor Fusion for Motion Capture System (모션 캡쳐 시스템을 위한 센서 퓨전)

  • Jeong, Il-Kwon;Park, ChanJong;Kim, Hyeong-Kyo;Wohn, KwangYun
    • Journal of the Korea Computer Graphics Society / v.6 no.3 / pp.9-15 / 2000
  • We propose a sensor fusion technique for a motion capture system. In our system, two kinds of sensors are used for mutual assistance. Four magnetic sensors (markers) are attached to the upper arms and the backs of the hands to assist twelve optical sensors attached to the arms of a performer. The optical sensor information is not always complete because the optical markers can be hidden by obstacles. In this case, magnetic sensor information is used to link the discontinuous optical sensor information. We use system identification techniques to model the relation between the sensors' signals: dynamic systems are constructed from input-output data, and the best model is determined from a set of candidate models using canonical system identification techniques. Our current approach uses a simple signal processing technique; in future work, we plan to apply other signal processing techniques such as Wiener or Kalman filters.
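The gap-linking step can be sketched as follows: over frames where the optical marker is visible, identify a simple linear input-output model from the magnetic channel to the optical channel, then evaluate it over the occluded frames. The static linear model is a hypothetical stand-in for the paper's canonical system identification.

```python
import numpy as np

def fill_gaps(optical, magnetic):
    """Fill occluded (NaN) optical samples from the magnetic channel.
    A linear model x_opt ~ a*x_mag + b is fit by least squares over the
    visible frames (a stand-in for full dynamic system identification),
    then used to predict the missing frames."""
    optical = optical.astype(float).copy()
    vis = ~np.isnan(optical)
    A = np.stack([magnetic[vis], np.ones(int(vis.sum()))], axis=1)
    coef, *_ = np.linalg.lstsq(A, optical[vis], rcond=None)
    optical[~vis] = coef[0] * magnetic[~vis] + coef[1]
    return optical
```

A Wiener or Kalman filter, as the abstract anticipates, would replace the static fit with a model that also exploits temporal dynamics.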


Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.18 no.3 / pp.9-16 / 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because of the difference in the required capture resolution between full-body capture and facial expression capture, the two have rarely been performed simultaneously. For generating natural animation, however, simultaneous capture of body and face is essential. For this purpose, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality. The dimensionality reduction enables us to estimate the full data from a part of the data. We justify our method by applying it to dynamic scenes to show its viability.
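The PCA estimate-full-from-partial idea can be sketched directly: build a low-dimensional basis from the expression database, solve for the basis coefficients using only the observed marker coordinates, and synthesize the full vector. This is a generic PCA completion sketch, not the paper's exact formulation.

```python
import numpy as np

def pca_basis(db, d):
    """Mean and top-d principal components of a row-per-expression database."""
    mean = db.mean(axis=0)
    _, _, Vt = np.linalg.svd(db - mean, full_matrices=False)
    return mean, Vt[:d]

def reconstruct(obs_idx, obs_val, mean, comps):
    """Solve for PCA coefficients from the observed (marker) coordinates
    only, then synthesize the full expression vector."""
    A = comps[:, obs_idx].T                       # observed rows of the basis
    c, *_ = np.linalg.lstsq(A, obs_val - mean[obs_idx], rcond=None)
    return mean + c @ comps
```

As long as the markers constrain all retained components (enough well-spread markers per component), the least-squares solve recovers the coefficients of the full expression.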

Computing Fast Secondary Skin Deformation of a 3D Character using GPU (GPU를 이용한 3차원 캐릭터의 빠른 2차 피부 변형 계산)

  • Kim, Jong-Hyuk;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.55-62 / 2012
  • This paper presents a new method to represent the secondary deformation effect using simple mass-spring simulation on the vertex shader of the GPU. For each skin vertex of a 3D character, a zero-length spring is connected to a virtual vertex that is to be rendered. When a skin vertex changes its position and velocity according to the character's motion, the position of the corresponding virtual vertex is computed by mass-spring simulation in parallel on the GPU. The proposed method represents, very quickly, the secondary deformation effect that conveys the material property of a character's skin during animation. Applied dynamically, it can represent the squash-and-stretch and follow-through effects frequently seen in traditional 2D animation with only a small amount of additional computation. The proposed method is applicable to the elastic skin deformation of a virtual character in interactive animation environments such as games.
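The per-vertex update can be sketched in one dimension: the rendered virtual vertex is pulled toward the animated skin vertex by a damped zero-length spring, integrated with semi-implicit Euler. On the GPU this update would run per vertex in the vertex shader; the spring constants here are illustrative, not the paper's values.

```python
def secondary_deform(skin_pos, k=80.0, c=8.0, dt=1.0 / 60.0):
    """Zero-length spring between a skin vertex and its rendered virtual
    vertex (1D sketch). skin_pos is the skin vertex position per frame;
    returns the virtual vertex position per frame."""
    p = skin_pos[0]  # virtual vertex starts at rest on the skin vertex
    v = 0.0
    out = []
    for target in skin_pos:
        a = k * (target - p) - c * v  # spring toward skin vertex + damping
        v += a * dt                   # semi-implicit Euler
        p += v * dt
        out.append(p)
    return out
```

With an underdamped spring the virtual vertex lags behind a sudden skin motion and overshoots past it before settling, which is exactly the follow-through and squash-and-stretch behavior the abstract describes.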

A Study on a 3D Design System for Korean Lip Sync (<한국어 립씽크를 위한 3D 디자인 시스템 연구>)

  • Shin, Dong-Sun;Chung, Jin-Oh
    • Proceedings of the HCI Society of Korea Conference / 2006.02b / pp.362-369 / 2006
  • We studied a Korean lip-sync synthesis scheme for 3D graphics and developed a design system that automatically generates natural lip sync corresponding to speech sounds. Facial animation can be broadly divided into emotional expression, i.e., the animation of facial expressions, and speech animation centered on the changing shape of the lips during dialogue. Whereas expression animation consists of nearly universal elements common across cultures, apart from slight cultural differences, speech animation must account for differences between languages. Because of this, applying lip-sync synthesis methods proposed for English or Japanese directly to Korean can distort perception through the mismatch of auditory and visual information. To solve this problem, this study developed a Korean lip-sync synthesis system that generates 3D speech animation from text and voice through the following steps: converting written text into a Korean phoneme sequence, time-segmenting the input speech using an HMM algorithm, and defining the 3D motion of facial feature points for each Korean phoneme; the system was applied to an actual character design process. This research also serves as a preliminary study whose results can be used not only for immediately applicable 3D character animation but also as a component technology for dynamic avatar-based interfaces. In other words, it is applicable both to visual design fields that use 3D graphics and to HCI. Human communication consists of verbal dialogue and visual facial expression, so applying facial animation gives communication a more human aspect. Ultimately, the system can be widely used in avatar-based interface design and in virtual reality, fields expected to evolve toward human interfaces that emphasize human-like interactivity and more comfortable, conversational interaction.
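The final stage of the pipeline described above, turning HMM time segments into mouth-shape keyframes, can be sketched as follows. The phoneme-to-viseme table and the midpoint keying rule are hypothetical simplifications, not the paper's feature-point definitions.

```python
def viseme_track(segments, viseme_map):
    """Convert HMM time segments (phoneme, start_sec, end_sec) into
    lip-sync keyframes: each phoneme's viseme (mouth pose id) is keyed
    at the midpoint of its segment. viseme_map is a hypothetical
    phoneme -> pose-id table."""
    keys = []
    for phoneme, start, end in segments:
        t = (start + end) / 2.0
        keys.append((t, viseme_map.get(phoneme, "neutral")))
    return keys
```

An animation system would then interpolate the 3D facial feature points between consecutive keyed poses.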


3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition (딥러닝 기반 손 제스처 인식을 통한 3D 가상현실 게임)

  • Lee, Byeong-Hee;Oh, Dong-Han;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.24 no.5 / pp.41-48 / 2018
  • The most natural way to increase immersion and provide free interaction in a virtual environment is a gesture interface using the user's hands. However, most studies of hand gesture recognition require specialized sensors or equipment, or show low recognition rates. This paper proposes a three-dimensional DenseNet convolutional neural network that recognizes hand gestures with no sensor or equipment other than an RGB camera for hand gesture input, and introduces a virtual reality game based on it. Experimental results on 4 static and 6 dynamic hand gestures showed an average recognition rate of 94.2% at 50 ms, making the approach usable as a real-time user interface for virtual reality games. The results of this research can be used as a hand gesture interface not only for games but also for education, medicine, and shopping.
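To classify dynamic gestures, a 3D CNN consumes a short clip of consecutive frames rather than a single image. The clip-assembly step for a live RGB stream can be sketched as a sliding window (the DenseNet-3D itself is omitted; the clip length is an assumed parameter):

```python
from collections import deque

import numpy as np

def clip_stream(frames, t=16):
    """Assemble fixed-length clips for a 3D CNN from a frame stream:
    a sliding window of the last t RGB frames is emitted once the
    buffer is full, one clip per new frame."""
    buf = deque(maxlen=t)
    for frame in frames:
        buf.append(frame)
        if len(buf) == t:
            yield np.stack(buf)  # shape (t, H, W, 3), ready for the network
```

Running the classifier on each emitted clip gives one prediction per frame interval, which matches the abstract's per-50 ms recognition setting.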

Preserving and Breakup for the Detailed Representation of Liquid Sheets in Particle-Based Fluid Simulations (입자 기반 유체 시뮬레이션에서 디테일한 액체 시트를 표현하기 위한 보존과 분해 기법)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.25 no.1 / pp.13-22 / 2019
  • In this paper, we propose a new method to improve the details of the fluid surface by removing liquid sheets that are over-preserved in particle-based water simulation. A variety of anisotropic approaches have been proposed to address the surface noise problem, one of the chronic problems in particle-based fluid simulation. However, no method has been proposed for stably expressing both the preservation and the breakup of liquid sheets. We propose a new framework that can dynamically add and remove water particles based on an anisotropic kernel and density, so as to represent the two features of liquid sheet preservation and breakup simultaneously in particle-based fluid simulations. The proposed technique represents the characteristics of a breaking fluid sheet well by removing excessively preserved liquid sheets, improving the quality of the liquid surface without noise.
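The add/remove decision can be sketched with a density criterion based on neighbor counts: over-dense particles are deleted (letting the sheet break up), while under-dense ones are reinforced with a seeded neighbor (preserving the thin sheet). This isotropic neighbor count is a schematic stand-in for the paper's anisotropic-kernel density.

```python
import numpy as np

def adjust_particles(pts, r=1.0, lo=2, hi=8, rng=None):
    """Density-driven particle add/remove sketch. pts is an (n, dim)
    array; particles with more than hi neighbors within radius r are
    removed (breakup), and each particle with fewer than lo neighbors
    gets one jittered seed nearby (preservation)."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    n = (d < r).sum(axis=1) - 1  # neighbor count, excluding self
    keep = pts[n <= hi]          # delete over-preserved particles
    sparse = pts[n < lo]
    seeds = sparse + rng.normal(0.0, 0.1 * r, sparse.shape)
    return np.vstack([keep, seeds])
```

In a full simulator this adjustment would run each timestep between the pressure solve and surface reconstruction.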

Isosurface Component Tracking and Visualization in Time-Varying Volumetric Data (시변 볼륨 데이터에서의 등위면 콤포넌트 추적 및 시각화)

  • Sohn, Bong-Soo
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.225-231 / 2009
  • This paper describes a new algorithm to compute and track the deformation of an isosurface component defined in time-varying volumetric data. Isosurface visualization is one of the most common methods for effective visualization of volumetric data. However, most isosurface visualization algorithms have been developed for static volumetric data. As imaging and simulation techniques have developed, large time-varying volumetric datasets are increasingly generated, so time-varying isosurface visualization that utilizes the dynamic properties of such data becomes necessary. First, we define a temporal correspondence between the isosurface components of two consecutive timesteps. Based on this definition, we present an algorithm that tracks the deformation of an isosurface component selected using the contour tree. By repeating this process over all timesteps, we can effectively visualize the time-varying data by displaying the dynamic deformation of the selected isosurface component.
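A common way to realize such a temporal correspondence is spatial overlap between labeled components of consecutive timesteps; the sketch below maps each component at time t to the component at t+1 sharing the most voxels. The overlap rule is a simple stand-in, not necessarily the paper's exact correspondence definition.

```python
import numpy as np

def correspond(labels_t, labels_t1):
    """Map each component id at time t to the component id at t+1 with
    the largest voxel overlap. Label 0 denotes background; components
    with no overlapping successor are omitted from the mapping."""
    mapping = {}
    for i in np.unique(labels_t):
        if i == 0:
            continue
        overlap = labels_t1[labels_t == i]   # successor labels on this component
        overlap = overlap[overlap != 0]
        if overlap.size:
            vals, counts = np.unique(overlap, return_counts=True)
            mapping[int(i)] = int(vals[np.argmax(counts)])
    return mapping
```

Chaining these per-step mappings over all timesteps yields the track of a component selected from the contour tree.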

Research on Virtual Simulator Sickness Using Field of View Restrictor According to Human Factor levels (FOV Restrictor를 활용한 가상 멀미 저감 요소 기술연구)

  • Kim, Chang-seop;Kim, So-Yeon;Kim, Kwanguk
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.49-59 / 2018
  • Simulator sickness is one of the important side effects of virtual reality. It is influenced by various factors, and field of view (FOV) is one of them. The FOV is the viewing angle limited by the screen; when the FOV is reduced, simulator sickness is reduced, but presence is lowered as well. A previous study developed a Dynamic FOV Restrictor (Center-fixed FOV Restrictor) to reduce simulator sickness while maintaining presence; it limits the FOV dynamically according to the speed and angular velocity of the avatar. We additionally developed an Eye-tracking Based Dynamic FOV Restrictor (Eye-tracking FOV Restrictor) that also accounts for head rotations and eye movements. This study compares simulator sickness and presence across three conditions: no FOV restrictor, the Center-fixed FOV Restrictor, and the Eye-tracking FOV Restrictor. The results showed that simulator sickness under the Center-fixed FOV Restrictor condition was significantly lower than under the other two conditions, and that there were no significant differences in presence among the three conditions. The interpretations and limitations of this study are discussed in this paper.
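The center-fixed restrictor's core rule, shrinking the viewing angle as avatar motion increases, can be sketched as a clamped linear mapping. All constants (reference speed, angular velocity, FOV bounds) are illustrative assumptions, not values from the study.

```python
def restricted_fov(speed, ang_vel, fov_max=110.0, fov_min=70.0,
                   s_ref=5.0, w_ref=90.0):
    """Dynamic FOV restrictor sketch: combine linear speed (m/s) and
    angular velocity (deg/s) into a motion load in [0, 1], then shrink
    the FOV (degrees) linearly from fov_max toward fov_min."""
    load = min(1.0, speed / s_ref + abs(ang_vel) / w_ref)
    return fov_max - (fov_max - fov_min) * load
```

The eye-tracking variant would additionally recenter the restrictor on the gaze point rather than the screen center, using the same load computation extended with head and eye velocity terms.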