• Title/Summary/Keyword: 실시간 그래픽스 (Real-time Graphics)


Stable Mass-Spring Model for Real-time Animation of Flexible Objects (비정형 물체의 실시간 애니메이션을 위한 안정적 질량-스프링 모델)

  • Gang, Yeong-Min;Jo, Hwan-Gyu;Park, Chan-Jong
    • Journal of the Korea Computer Graphics Society / v.5 no.1 / pp.27-33 / 1999
  • In this paper, we propose an efficient technique for animating flexible thin objects, represented with a mass-spring model. Many techniques have used mass-spring models to generate plausible animations of soft objects. The most straightforward integration scheme is the explicit Euler method, but it suffers from a serious instability problem. Implicit integration overcomes the instability, but its critical flaw is that it requires solving a large linear system. This paper presents a fast animation technique for mass-spring models using an approximated implicit method: the proposed technique stably updates the state of n mass points in O(n) time when the total number of springs is O(n). We also model the interaction between the flexible object and the surrounding air to produce more plausible results.

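The instability the abstract describes is easy to reproduce. Below is a minimal 1-D sketch (not the paper's approximated implicit solver) contrasting explicit Euler, whose error grows without bound on a stiff spring, with a semi-implicit (symplectic) step that stays bounded at the same time step; all constants are illustrative:

```python
def explicit_euler_step(x, v, k, m, dt):
    """Explicit Euler: position and velocity both use the old state."""
    a = -(k / m) * x          # force of a single zero-rest-length spring
    return x + dt * v, v + dt * a

def semi_implicit_euler_step(x, v, k, m, dt):
    """Semi-implicit (symplectic) Euler: position uses the *updated* velocity."""
    a = -(k / m) * x
    v_new = v + dt * a
    return x + dt * v_new, v_new

def simulate(step, n_steps=1000, dt=0.1, k=100.0, m=1.0):
    """Run one spring from x=1, v=0 and return the final displacement magnitude."""
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        x, v = step(x, v, k, m, dt)
    return abs(x)
```

With this stiff spring and large time step, `simulate(explicit_euler_step)` blows up by many orders of magnitude while `simulate(semi_implicit_euler_step)` stays near the initial amplitude, which is precisely why the paper pursues an (approximated) implicit scheme.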

Processing Techniques for Non-photorealistic Contents Rendering in Mobile Devices (모바일 기기에서의 비실사적 콘텐츠 렌더링을 위한 프로세싱 기법)

  • Jeon, Jae-Woong;Jang, Hyun-Ho;Choy, Yoon-Chul
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.119-129 / 2010
  • Recently, the growth of mobile services and the rising demand for mobile devices have made the mobile environment an important target for computer graphics; demand for 3D graphics services on mobile devices in particular is steadily increasing. Up to now, however, non-photorealistic rendering has mainly been studied on desktop platforms, so existing techniques were designed for desktop computers and are not well suited to mobile devices. There is therefore a growing need for processing techniques that can render 3D non-photorealistic graphics on mobile devices. In this paper, we discuss processing techniques for non-photorealistic rendering on mobile devices, in particular cartoon shading and rendering, including silhouette-edge rendering for mobile display environments and a preprocessing-file technique for shading. The proposed preprocessing file and rendering pipeline improve the efficiency of 3D mobile graphics services such as cartoon-style 3D models, and our work can deliver mobile cartoon rendering results and various mobile contents to users.
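For context, the core of cartoon (cel) shading is quantizing a continuous diffuse term into a few flat tones. The sketch below is a generic illustration of that idea, not the paper's mobile pipeline; the band count and rounding rule are assumptions:

```python
import math

def toon_shade(normal, light_dir, bands=3):
    """Quantize Lambertian diffuse intensity into `bands` flat tones (cel shading)."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = norm(normal), norm(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))  # clamped N . L
    # snap the continuous intensity to the nearest band boundary
    return round(diffuse * bands) / bands
```

A surface facing the light gets the full tone, a grazing angle drops to an intermediate band, and back-facing surfaces go dark, producing the characteristic flat-shaded cartoon look.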

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.11-20 / 2019
  • With the development of deep learning technology, research is actively being carried out on user-friendly interfaces suitable for virtual reality and augmented reality applications. To support interfaces that use the user's hands, this paper proposes a deep-learning-based fingertip detection method that tracks fingertip coordinates so users can select virtual objects or write and draw in the air. We first crop the approximate fingertip region from the input image using Grad-CAM, then run a convolutional neural network with atrous convolution on the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms and requires no preprocessing to annotate objects. To verify the method, we implemented an air-writing application: with a recognition rate of 81% and a processing time of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
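As background on the atrous operation the method relies on: a dilated kernel samples the input at taps spaced `rate` apart, enlarging the receptive field without adding parameters. A minimal 1-D illustration (not the paper's network):

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are `rate` samples apart."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field of the dilated kernel
    return [sum(kernel[j] * signal[i + j * rate] for j in range(k))
            for i in range(len(signal) - span + 1)]
```

With `rate=1` this reduces to an ordinary convolution; with `rate=2` the same 3-tap kernel covers a 5-sample window, which is the property atrous convolution exploits for dense prediction at full resolution.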

Mobile Volume Rendering System for Client-Server Environment (클라이언트 서버 기반 모바일 볼륨 가시화 시스템)

  • Lee, Woongkyu;Kye, Heewon
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.17-26 / 2015
  • In this paper, we describe a volume rendering system for a client-server environment. A single GPU-equipped PC works as the server, based on the observation that only a few concurrent users access a volume rendering system in a small hospital. As clients, we used Android mobile devices such as smartphones. The client application transforms user events into rendering requests; when the server receives a request, it renders the volume on the GPU. The rendered image is compressed to JPEG or PNG format to save network bandwidth and reduce transfer time. In addition, we prune events while the user is dragging a touch in order to reduce latency, and the server compensates for the pruning by interpolating the touch positions. As a result, real-time volume rendering is possible for five concurrent users on a single GPU-equipped commodity machine.
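The event pruning and interpolation described above can be sketched as follows; this is an illustrative reconstruction, not the paper's implementation, and the function names and the linear interpolation scheme are assumptions:

```python
def prune_drag_events(queue):
    """While a render is in flight, collapse queued drag events to the newest one."""
    return queue[-1:] if queue else []

def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_touch(last, current, steps):
    """Server-side compensation: reconstruct intermediate touch positions that
    were pruned on the client by linearly interpolating between the last
    handled position and the newest one."""
    return [(lerp(last[0], current[0], i / steps),
             lerp(last[1], current[1], i / steps))
            for i in range(1, steps + 1)]
```

The client keeps the request queue short (low latency), while the server re-creates a smooth drag path from the sparse positions it actually receives.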

Computing Fast Secondary Skin Deformation of a 3D Character using GPU (GPU를 이용한 3차원 캐릭터의 빠른 2차 피부 변형 계산)

  • Kim, Jong-Hyuk;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.55-62 / 2012
  • This paper presents a new method to produce secondary deformation effects using simple mass-spring simulation on the vertex shader of the GPU. For each skin vertex of a 3D character, a zero-length spring connects it to a virtual vertex, which is the vertex actually rendered. When a skin vertex changes its position and velocity according to the character motion, the position of the corresponding virtual vertex is computed by mass-spring simulation in parallel on the GPU. The proposed method produces, very quickly, the secondary deformation that conveys the material properties of a character's skin during animation. Applied dynamically, the technique can represent the squash-and-stretch and follow-through effects frequently seen in traditional 2D animation, with only a very small amount of additional computation. The method is applicable to elastic skin deformation of a virtual character in interactive animation environments such as games.
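The zero-length-spring idea can be sketched in a few lines. This is a CPU toy version of the per-vertex update (the paper runs it in parallel on the vertex shader); the explicit integration scheme and the damping constant are assumptions:

```python
def secondary_motion_step(skin_pos, virt_pos, virt_vel, k, damping, dt):
    """One mass-spring step for a virtual vertex tied to its skin vertex by a
    zero-length spring: the rest length is 0, so the force pulls the virtual
    vertex straight toward the animated skin vertex (jiggle / follow-through)."""
    force = tuple(k * (s - p) for s, p in zip(skin_pos, virt_pos))
    virt_vel = tuple(damping * (v + f * dt) for v, f in zip(virt_vel, force))
    virt_pos = tuple(p + v * dt for p, v in zip(virt_pos, virt_vel))
    return virt_pos, virt_vel
```

When the skin vertex jumps, the rendered (virtual) vertex lags behind and oscillates before settling, which is exactly the secondary-motion effect; each vertex updates independently, so the loop maps directly to a per-vertex shader.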

Multi-scale Texture Synthesis (다중 스케일 텍스처 합성)

  • Lee, Sung-Ho;Park, Han-Wook;Lee, Jung;Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.14 no.2 / pp.19-25 / 2008
  • We synthesize a texture with different structures at different scales. Our technique is based on deterministic parallel synthesis allowing real-time processing on a GPU. A new coordinate transformation operator is used to construct a synthesized coordinate map based on different exemplars at different scales. The runtime overhead is minimal because this operator can be precalculated as a small lookup table. Our technique is effective for upsampling texture-rich images, because the result preserves texture detail well. In addition, a user can design a texture by coloring a low-resolution control image. This design tool can also be used for the interactive synthesis of terrain in the style of a particular exemplar, using the familiar 'raise and lower' airbrush to specify elevation.

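The coordinate-map formulation described above reduces synthesis to a pure gather, which is what makes it GPU-friendly and lets the transformation operator be baked into a small lookup table. A toy sketch, with a trivial wrap-around tiling map standing in for the paper's synthesized coordinate map:

```python
def tiled_coord_map(h, w, ex_h, ex_w):
    """Simplest possible coordinate map: wrap-around tiling of the exemplar.
    (A real synthesizer would fill this map with jittered, corrected coords.)"""
    return [[(i % ex_h, j % ex_w) for j in range(w)] for i in range(h)]

def synthesize_from_coords(exemplar, coord_map):
    """Final synthesis pass: every output texel just gathers the exemplar texel
    its coordinate map entry points at -- no dependencies between texels."""
    return [[exemplar[y][x] for (y, x) in row] for row in coord_map]
```

Because each output texel is independent, the gather runs in parallel per pixel; multi-scale synthesis chains such maps across scales, pointing into different exemplars.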

Automatic Virtual Camera Control Using Motion Area (모션 면적을 이용한 버추얼 카메라의 자동 제어 기법)

  • Kwon, Ji-Yong;Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.14 no.2 / pp.9-17 / 2008
  • We propose a method to determine camera parameters for character motion that considers the motion itself. The basic idea is to approximately compute the area swept by the character's links when orthogonally projected onto the image plane, which we call the "motion area". Using the motion area, we can determine good fixed camera parameters and camera paths for a given character motion in off-line or real-time camera control. Our experimental results demonstrate that our camera path generation algorithms can compute a smooth camera path while the camera effectively displays the dynamic features of the character motion. Our methods can easily be combined with methods for generating occlusion-free camera paths. We also expect they can serve general camera planning as a heuristic for measuring the visual quality of scenes containing dynamically moving characters.

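The "motion area" can be approximated as described: for each link, take the quadrilateral between its projected segment in two consecutive frames and sum the areas. A minimal 2-D sketch (the exact sweep approximation used in the paper may differ):

```python
def quad_area(p0, p1, q1, q0):
    """Shoelace area of the quad swept by segment (p0,p1) moving to (q0,q1)."""
    pts = [p0, p1, q1, q0]
    s = 0.0
    for i in range(4):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % 4]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def motion_area(frames):
    """Approximate motion area: total area swept on the image plane by each
    projected link across consecutive frames. `frames` is a list of per-frame
    lists of 2-D segments ((x, y), (x, y)), already projected."""
    total = 0.0
    for fa, fb in zip(frames, frames[1:]):
        for (p0, p1), (q0, q1) in zip(fa, fb):
            total += quad_area(p0, p1, q1, q0)
    return total
```

A camera pose that maximizes this quantity is one from which the character's movement looks large on screen, which is the criterion the paper optimizes.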

Processing Methods for Ink-and-Wash Painting in Mobile Contents (모바일 콘텐츠의 수묵 담채 렌더링을 위한 프로세싱 기법)

  • Jang, Hyun-Ho;Jeon, Jae-Woong;Choy, Yoon-Chul
    • The Journal of the Korea Contents Association / v.11 no.3 / pp.137-146 / 2011
  • The development of mobile devices such as smartphones and tablet PCs, together with the increased use of mobile contents, has drawn attention to research in mobile computer graphics. However, previous non-photorealistic rendering techniques, such as ink-and-wash painting with thin colors, were mostly designed for desktop platforms and are not well matched to mobile devices. As a result, mobile-specific rendering techniques are needed to create 3D mobile contents with non-photorealistic graphics. We introduce processing techniques for mobile devices, in particular ink-and-wash painting and oriental thin coloring. We expect this work to enable various 3D mobile contents in non-photorealistic styles; the proposed preprocessing techniques and rendering pipelines also allow mobile devices to render them in real time.

Vehicle Crash Simulation using Trajectory Optimization (경로 최적화 알고리즘을 이용한 3차원 차량 충돌 시뮬레이션)

  • Seong, Jin-Wook;Ko, Seung-Wook;Kwon, Tae-Soo
    • Journal of the Korea Computer Graphics Society / v.21 no.5 / pp.11-19 / 2015
  • Our research introduces a novel system for creating 3D vehicle animations, designed for intuitively authoring vehicle accident scenes from videos or from user-drawn trajectories. The system combines three existing ideas. The first part obtains the 3D trajectory of a vehicle from black-box (dashboard camera) videos. The second part is a tracking algorithm that controls a vehicle to follow a given trajectory with small error. The last part optimizes the vehicle control parameters so that the error between the input trajectory and the simulated vehicle trajectory is minimized. We also simulate the deformation of the car due to impact to achieve believable results in real time.
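The error-minimizing parameter search in the last part can be illustrated with a generic gradient-free coordinate descent; this is a stand-in for the paper's optimizer, and the simulator interface (a function from parameters to a sampled trajectory) is an assumption:

```python
def trajectory_error(params, simulate, target):
    """Sum of squared differences between simulated and target trajectories."""
    return sum((s - t) ** 2 for s, t in zip(simulate(params), target))

def optimize_params(simulate, target, params, step=0.5, iters=100):
    """Gradient-free coordinate descent over control parameters: nudge each
    parameter up and down, keep a change if the trajectory error drops,
    and shrink the step when no nudge helps."""
    best = trajectory_error(params, simulate, target)
    for _ in range(iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                cand = list(params)
                cand[i] += delta
                err = trajectory_error(cand, simulate, target)
                if err < best:
                    params, best, improved = cand, err, True
        if not improved:
            step *= 0.5
    return params, best
```

For example, with a one-parameter "constant speed" simulator the search recovers the speed that makes the simulated path match the input trajectory.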

Realistic Keyboard Typing Motion Generation Based on Physics Simulation (물리 시뮬레이션에 기반한 사실적인 키보드 타이핑 모션 생성)

  • Jang, Yongho;Eom, Haegwang;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.21 no.5 / pp.29-36 / 2015
  • Human fingers are essential parts of the body that perform complex and detailed motions, and expressing natural finger motion is one of the most important issues in character animation research. Keyboard typing animation in particular is hard to create with existing animation pipelines, because typing requires highly dexterous motion involving many joints moving in a natural way. In this paper, we present a method for generating realistic keyboard typing motion based on physics simulation. To generate typing motion properly with physics-based simulation, the hand and keyboard models must be positioned within an allowed range of the simulation space, and each keystroke has to occur at the precise key location indicated by the input signal. Based on observation, we also incorporate natural tendencies that accompany actual typing: for example, the hands and fingers tend to return to a default pose, and idle fingers tend to minimize their motion. We handle these various constraints in a single solver, achieving natural keyboard typing simulation in real time. The results can be employed in various animation and virtual reality applications.
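The observed tendencies (return to a default pose, minimal idle motion) can be illustrated with a toy model; the 1-D key columns, home positions, function names, and relaxation rate below are all hypothetical and are not the paper's solver:

```python
# Hypothetical 1-D "key column" home positions for four fingers.
HOME = {'index': 4.0, 'middle': 3.0, 'ring': 2.0, 'pinky': 1.0}

def choose_finger(key_col, finger_pos):
    """Pick the finger that minimizes travel to the target key column,
    mirroring the tendency of fingers to move as little as possible."""
    return min(finger_pos, key=lambda f: abs(finger_pos[f] - key_col))

def relax_to_home(finger_pos, rate=0.5):
    """Idle fingers drift back toward the default (home-row) pose each frame."""
    return {f: p + rate * (HOME[f] - p) for f, p in finger_pos.items()}
```

In a full physics-based system these preferences would enter the solver as soft constraints alongside the hard keystroke-position constraints, rather than being applied as discrete rules.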