• Title/Summary/Keyword: Real-time Graphics (실시간 그래픽스)


Adaptive Mass-Spring Method for the Synchronization of Dual Deformable Model (듀얼 가변형 모델 동기화를 위한 적응성 질량-스프링 기법)

  • Cho, Jae-Hwan;Park, Jin-Ah
    • Journal of the Korea Computer Graphics Society / v.15 no.3 / pp.1-9 / 2009
  • Traditional computer simulation uses only conventional input and output devices. With the recent emergence of haptic techniques, which can give users kinetic and tactile feedback, the field of computer simulation is diversifying. In particular, as virtual-reality-based surgical simulation has been recognized as an effective training tool in medical education, practical virtual simulation of surgery has become a stimulating new research area. A surgical simulation framework should represent the realistic properties of human organs for highly immersive user interaction with a virtual object, and it should produce proper haptic and visual feedback. However, a single model may not be suitable for simulating both haptic and visual feedback, because the perceptual channels of the two kinds of feedback differ, as do their system requirements. We therefore separate the two models so that haptic and visual feedback are simulated independently but simultaneously. We propose an adaptive mass-spring method as a multi-modal simulation technique to synchronize the two separated models, and we present a dual-model simulation framework that can realistically simulate the behavior of the soft, pliable human body along with haptic feedback from the user's interaction.
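
The core numerical building block named in the abstract can be sketched as follows: a minimal one-dimensional damped mass-spring integration step in Python. This is an illustration only; the paper's adaptive scheme, which keeps the haptic and visual models synchronized, is not reproduced here.

```python
def spring_step(x1, x2, v1, v2, k=10.0, rest=1.0, c=0.5, m=1.0, dt=0.01):
    """One explicit-Euler step for two point masses joined by a damped spring."""
    stretch = (x2 - x1) - rest          # positive when the spring is extended
    f1 = k * stretch + c * (v2 - v1)    # force on mass 1 (pulls it toward mass 2)
    v1 += f1 / m * dt
    v2 -= f1 / m * dt                   # equal and opposite force on mass 2
    x1 += v1 * dt
    x2 += v2 * dt
    return x1, x2, v1, v2
```

Because the forces are equal and opposite, the two masses drift toward the rest length while their combined momentum is conserved.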

Multi-screen Content Creation using Rig and Monitoring System (다면 콘텐츠 현장 촬영 시스템)

  • Lee, Sangwoo;Kim, Younghui;Cha, Seunghoon;Kwon, Jaehwan;Koh, Haejeong;Park, Kisu;Song, Isaac;Yoon, Hyungjin;Jang, Kyungyoon
    • Journal of the Korea Computer Graphics Society / v.23 no.5 / pp.9-17 / 2017
  • Filming with multiple cameras is required to produce multi-screen content, which can fill the viewer's entire field of view (FOV) to provide an increased sense of immersion. In such a filming scenario, it is very important to monitor how the images captured by multiple cameras are composed into a single piece of content and how that content will be displayed in an actual theatre. Most recent studies on creating special-format content have focused on their own purposes, such as stereoscopic and panoramic images; there has been no research on content creation optimized for the three-screen theatres that have recently been spreading. In this paper, we propose a novel content production system consisting of a rig that can control three cameras and monitoring software specialized for multi-screen content. The proposed rig can precisely control the angles between the cameras and capture a wide angle of view with three cameras, and it works with the monitoring software via remote communication. The monitoring software automatically aligns the content in real time and updates the alignment according to the angle of the camera rig. Further, production efficiency is greatly improved by making the alignment information available for post-production.
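
As a rough illustration of the rig geometry described above, the combined horizontal coverage of three cameras panned outward can be estimated with a simplified planar model. The formula and the no-gap condition below are assumptions for illustration, not taken from the paper, and they ignore parallax and overlap loss:

```python
def rig_coverage(camera_fov, pan_angle):
    """Total horizontal FOV (degrees) of a three-camera rig: one center camera
    plus two cameras panned outward by pan_angle, and whether adjacent views
    still overlap (a gap opens once pan_angle exceeds one camera's FOV)."""
    total = camera_fov + 2 * pan_angle
    no_gap = pan_angle <= camera_fov
    return total, no_gap
```

For example, three 60-degree cameras panned by 40 degrees cover about 140 degrees without gaps in this model.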

Effective Volume Rendering and Virtual Staining Framework for Visualizing 3D Cell Image Data (3차원 세포 영상 데이터의 효과적인 볼륨 렌더링 및 가상 염색 프레임워크)

  • Kim, Taeho;Park, Jinah
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.9-16 / 2018
  • In this paper, we introduce a visualization framework for cell image data obtained from optical diffraction tomography (ODT), including a method for representing cell morphology in a 3D virtual environment and a color mapping protocol. Unlike commonly known volume data sets with solid structural information, such as CT images of human organs or industrial machinery, cell image data carry rather vague information with much morphological variation at the boundaries. It is therefore difficult to arrive at a consistent representation of cell structure for visualization. To obtain the desired visual representation of cellular structures, we propose an interactive visualization technique for ODT data. To visualize the 3D shape of the cell, we adopt a volume rendering technique commonly applied to volume data and improve the quality of the rendering result with an empty-space jittering method. Furthermore, we provide a layer-based independent rendering method with multiple transfer functions to represent two or more cellular structures in a unified render window. In our experiments, we examined the effectiveness of the proposed method by visualizing various types of cells obtained from a microscope that captures ODT and fluorescence images together.
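
The transfer-function and compositing machinery that volume rendering builds on can be sketched generically. The Python fragment below shows a simple 1D opacity transfer function and front-to-back compositing along one ray; it is a minimal, assumed stand-in, not the paper's layer-based renderer or its jittering method:

```python
def transfer(density, threshold=0.3):
    """Map a scalar density to (gray value, opacity); below threshold is transparent."""
    if density < threshold:
        return 0.0, 0.0
    a = min(1.0, (density - threshold) / (1.0 - threshold))
    return density, a

def composite(samples):
    """Front-to-back alpha compositing of density samples along one ray."""
    color, alpha = 0.0, 0.0
    for d in samples:
        c, a = transfer(d)
        color += (1.0 - alpha) * a * c   # accumulate weighted color
        alpha += (1.0 - alpha) * a       # accumulate remaining opacity
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha
```

A second transfer function with a different threshold could drive a second rendering layer, which is the spirit of the multi-structure rendering described above.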

The Development of Real-time Video Associated Data Service System for T-DMB (T-DMB 실시간 비디오 부가데이터 서비스 시스템 개발)

  • Kim Sang-Hun;Kwak Chun-Sub;Kim Man-Sik
    • Journal of Broadcast Engineering / v.10 no.4 s.29 / pp.474-487 / 2005
  • T-DMB (Terrestrial Digital Multimedia Broadcasting) adopted the MPEG-4 BIFS (Binary Format for Scene) Core2D scene description profile and graphics profile as the standard for its video associated data service. Using BIFS, objects such as text, still images, circles, and polygons can be overlaid on the main display of the receiving end according to properties designated on the broadcasting side, and clickable buttons and website links can be attached to desired objects; a variety of interactive data services can therefore be provided through BIFS. In this paper, we implement a real-time video associated data service system for T-DMB. The system emphasizes real-time data service driven by user operation, as well as interworking and stability with our previously developed video encoder. It consists of a BIFS Real-time System, an Automatic Stream Control System, and a Receiving Monitoring System, and its basic functions are designed to reflect, as a top priority, T-DMB programs and the characteristics of the program production environment. The developed system was used in a BIFS trial service via KBS T-DMB, and it is expected to be used in the T-DMB main service after improvements such as strengthening system stability.

Camera and Receiver Development for 3D HDTV Broadcasting (3차원 고화질TV 방송용 카메라 및 수신기 개발)

  • 이광순;허남호;안충현
    • Journal of Broadcast Engineering / v.7 no.3 / pp.211-218 / 2002
  • This paper introduces an HD 3DTV camera and 3DTV receiver that are compatible with the ATSC HDTV broadcasting system. The developed 3DTV camera is based on stereoscopic techniques and has control functions to operate both left and right zoom lenses simultaneously and to control the vergence. Moreover, to allow manual vergence control and to eliminate the synchronization problem between the two images, the camera has a 3DTV video multiplexing function that combines the left and right images into a single image. The developed receiver handles the multiplexed 3DTV signal and has various analog/digital interfaces. The performance of the developed system was confirmed by shooting a selected soccer match at the 2002 FIFA Korea/Japan™ World Cup and broadcasting the match. The HD 3DTV camera and receiver can be applied to 3DTV industries such as 3D movies, 3D games, 3D image processing, and 3DTV broadcasting systems.

Interactive 3D Visualization of Ceilometer Data (운고계 관측자료의 대화형 3차원 시각화)

  • Lee, Junhyeok;Ha, Wan Soo;Kim, Yong-Hyuk;Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society / v.24 no.2 / pp.21-28 / 2018
  • We present interactive methods for visualizing cloud height data and backscatter data collected from ceilometers in a three-dimensional virtual space. Because ceilometer data are high-dimensional, large-scale data associated with both spatial and temporal information, it is nearly impossible to exhibit all their aspects with static, two-dimensional images. Based on three-dimensional rendering technology, our visualization methods allow the user to observe both global variations and local features of the three-dimensional representations of ceilometer data from various angles by interactively manipulating the timing and the view as desired. The cloud height data, coupled with terrain data, are visualized as a realistic cloud animation in which many clouds form and dissipate over the terrain. The backscatter data are visualized as a three-dimensional terrain that effectively represents how the amount of backscatter changes with time and altitude. Our system facilitates multivariate analysis of ceilometer data by enabling the user to select the date to be examined, the level of detail of the terrain, and additional data such as the planetary boundary layer height. We demonstrate the usefulness of our methods through various experiments with real ceilometer data collected from 93 sites scattered across the country.
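
The idea of displaying backscatter as a terrain can be illustrated with a trivial sketch: each (time, altitude) sample becomes a 3D vertex whose height encodes the backscatter value. The function below is an illustrative assumption, not the authors' implementation:

```python
def backscatter_heightfield(grid, dx=1.0, dz=1.0, scale=1.0):
    """Turn a 2D backscatter array (rows = time steps, columns = altitude bins)
    into a list of 3D vertices (x = time, y = backscatter height, z = altitude)."""
    verts = []
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            verts.append((i * dx, value * scale, j * dz))
    return verts
```

A renderer would then triangulate neighboring vertices into a mesh; the sketch stops at vertex generation.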

Accelerating GPU-based Volume Ray-casting Using Brick Vertex (브릭 정점을 이용한 GPU 기반 볼륨 광선투사법 가속화)

  • Chae, Su-Pyeong;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society / v.17 no.3 / pp.1-7 / 2011
  • Recently, various methods have been proposed to accelerate GPU-based volume ray-casting. However, these methods can suffer from several problems, such as a data-transmission bottleneck between the CPU and GPU, additional video memory required for hierarchical structures, and increased processing time whenever the opacity transfer function changes. In this paper, we propose an efficient GPU-based empty-space skipping technique to solve these problems. We store the maximum density of each brick of the volume dataset in a vertex element; in the geometry shader, we then delete the vertices that the opacity transfer function regards as transparent. The remaining vertices are used to generate bounding boxes of the non-transparent regions, which help rays traverse efficiently. Although these vertices are independent of the viewing conditions, they must be regenerated whenever the opacity transfer function changes. Because the generation stage runs in the GPU pipeline, our technique produces the opaque vertices fast enough for interactive processing. The rendering results of our algorithm are identical to those of general GPU ray-casting, but performance can be more than 10 times higher.
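
The essence of the technique (per-brick maximum density plus transfer-function culling) can be sketched on the CPU side as follows. The paper performs the culling in a geometry shader on the GPU, so this Python version is only an illustrative analogue:

```python
def brick_maxima(volume, brick=2):
    """volume: 3D nested lists of densities. Returns {(bx, by, bz): max density}
    over cubic bricks of edge length `brick`."""
    maxima = {}
    for x in range(len(volume)):
        for y in range(len(volume[0])):
            for z in range(len(volume[0][0])):
                key = (x // brick, y // brick, z // brick)
                v = volume[x][y][z]
                if v > maxima.get(key, float("-inf")):
                    maxima[key] = v
    return maxima

def opaque_bricks(maxima, opacity, eps=1e-6):
    """Keep only bricks whose maximum density maps to non-zero opacity; the
    rest are empty space a ray can skip."""
    return {k for k, m in maxima.items() if opacity(m) > eps}
```

When the opacity transfer function changes, only `opaque_bricks` must be re-run, mirroring the regeneration step described in the abstract.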

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.85-92 / 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because the style reflects the unique personality of the character, it is very important to preserve the style and keep it consistent. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that preserves the style of the character motion using only a small amount of animation data created specifically for the character. Instead of machine learning approaches requiring a large amount of training data, we suggest a search-based method that directly searches the animation data for the character pose most similar to the current user's pose. To show the usability of our method, we conducted experiments with a character model and its animation data created by an expert designer for a virtual reality game. To show that our method preserves the original motion style well, we compared our result with the result obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
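
The search-based core of such a method reduces to a nearest-neighbor query over the character's own animation frames. A minimal sketch, where the pose representation (a flat vector of joint values) and the distance metric are assumptions:

```python
def closest_pose(sensor_pose, animation_frames):
    """Return the animation frame minimizing squared distance to the sensor pose.
    Poses are flat tuples of joint values; frames come from the character's
    own animation data, so the retrieved pose keeps the character's style."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(animation_frames, key=lambda f: dist2(f, sensor_pose))
```

With few sensors, the pose vector simply has fewer entries, which is one way the sensor-count experiments above could vary the setup.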

A Study on The Metaverse Content Production Pipeline using ZEPETO World (제페토 월드를 활용한 메타버스 콘텐츠 제작 공정에 관한 연구)

  • Park, MyeongSeok;Cho, Yunsik;Cho, Dasom;Na, Giri;Lee, Jamin;Cho, Sae-Hong;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.91-100 / 2022
  • This study proposes a metaverse content production pipeline using ZEPETO World, one of the representative metaverse platforms in Korea. Based on the Unity 3D engine, a ZEPETO world is configured using a ZEPETO template, and the core functions of metaverse content enabling multi-user participation, such as logic, interaction, and property control, are implemented through ZEPETO scripts. This study utilizes the basic functions of the ZEPETO script, such as properties, events, and components, as well as the ZEPETO player, which includes avatar loading, character movement, and camera control. In addition, based on ZEPETO properties such as World Multiplayer and Client Starter, we summarize the core synchronization process required for producing multiplayer metaverse content, including object transformation, dynamic object creation, property addition, and real-time property control. Based on this, we validate the proposed production pipeline by directly producing multiplayer metaverse content with ZEPETO World.
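
The synchronization steps listed above (dynamic object creation, transform updates, property control) amount to replaying state messages on every client so that all sessions converge on the same scene. A language-agnostic sketch in Python, with an assumed message schema (ZEPETO scripts themselves are written in TypeScript):

```python
def apply_sync(scene, messages):
    """Replay (op, object_id, payload) sync messages onto a local scene dict.
    The ops mirror the abstract: 'create' a dynamic object, 'transform' it,
    and 'set' an arbitrary property. The schema is an assumption."""
    for op, obj_id, payload in messages:
        if op == "create":
            scene[obj_id] = {"transform": payload, "props": {}}
        elif op == "transform":
            scene[obj_id]["transform"] = payload
        elif op == "set":
            key, value = payload
            scene[obj_id]["props"][key] = value
    return scene
```

Every client applying the same message stream ends up with an identical scene dictionary, which is the invariant multiplayer synchronization must maintain.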

Application of Immersive Virtual Environment Through Virtual Avatar Based On Rigid-body Tracking (강체 추적 기반의 가상 아바타를 통한 몰입형 가상환경 응용)

  • MyeongSeok Park;Jinmo Kim
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.69-77 / 2023
  • This study proposes a rigid-body-tracking-based virtual avatar application method to increase social presence and provide diverse experiences for virtual reality (VR) users in an immersive virtual environment. The proposed method estimates the motion of a virtual avatar through inverse kinematics driven by real-time rigid-body tracking from marker-based motion capture. Through this, it aims to design a highly immersive virtual environment using only simple object manipulation in the real world. Science-experiment educational content was produced to test and analyze applications of immersive virtual environments through virtual avatars. In addition, audiovisual education, full-body tracking, and the proposed rigid-body tracking method were compared and analyzed through a survey. In the proposed virtual environment, participants wore VR HMDs and completed a survey to assess the immersion and educational effect of virtual avatars performing experimental educational actions derived from the estimated motions. As a result, the rigid-body-tracking-based virtual avatar method induced higher immersion and educational effect than traditional audiovisual education. It was also confirmed that a sufficiently positive experience can be provided without the extensive setup required for full-body tracking.
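
The inverse-kinematics step mentioned above can be illustrated with a minimal planar solver. The cyclic-coordinate-descent (CCD) sketch below for a two-link chain is a generic stand-in, not the authors' avatar solver:

```python
import math

def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar chain from the origin."""
    x = y = a = 0.0
    pts = [(0.0, 0.0)]
    for ang, length in zip(angles, lengths):
        a += ang                       # angles accumulate along the chain
        x += length * math.cos(a)
        y += length * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_ik(angles, lengths, target, iters=50):
    """CCD: rotate each joint so the end effector swings toward the target."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            end, base = pts[-1], pts[i]
            to_end = math.atan2(end[1] - base[1], end[0] - base[0])
            to_target = math.atan2(target[1] - base[1], target[0] - base[0])
            angles[i] += to_target - to_end
    return angles
```

Given tracked rigid-body positions as targets, such a solver fills in plausible joint angles for the avatar's limbs.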