• Title/Summary/Keyword: 한국컴퓨터그래픽스 (Korea Computer Graphics)

Search Result 948, Processing Time 0.03 seconds

A Study of 'Hear Me Later' VR Content Production to Improve the Perception of the Visually-Impaired (시각 장애인에 대한 인식 개선을 위한 'Hear me later' VR 콘텐츠 제작 연구)

  • Kang, YeWon;Cho, WonA;Hong, SeungA;Lee, KiHan;Ko, Hyeyoung
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.99-109 / 2020
  • This study was conducted to improve education methods for raising awareness of the visually impaired. 'Hear me later' was designed and implemented as VR content that lets users experience the world through the eyes and environment of a visually impaired person. The main target group ranges from middle and high school students to adolescents in their twenties. The content follows a student's daily life: waking up at home in the morning, going to school, taking classes, and returning home late in the dark. In addition, 10 quests are placed on each map to induce user participation and activity. These quests are everyday activities for non-disabled people but uncomfortable ones for the visually impaired. To verify the effect of 'Hear me later', the perceptions of eight participants, ranging from their early teens to early twenties, toward visually impaired people were measured through pre- and post-evaluation of the VR content experience. As a result, perception of the visually impaired increased by 30% in the post-evaluation compared to the pre-evaluation. In particular, changes in misunderstandings and prejudice toward the visually impaired were remarkable. This study verified the possibility of a VR-based disability-experience education program that can freely construct space and time and maximize the sense of experience. It also laid the foundation for expanding such programs to various areas of disability-awareness improvement.

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine each light's range of influence: the first stores light color and the second stores light direction information. Each frame runs in two steps. The first step updates, in a compute shader, the particle information required for 3D-texture initialization and rendering. Each particle position is converted to 3D-texture sampling coordinates, and based on these coordinates the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second accumulates the sum of direction vectors from each voxel to those lights. The second step runs in the normal rendering pipeline. From the world position of the polygon being rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Because the sampling coordinates correspond 1:1 to the size of the 3D texture and the game world, the pixel's world coordinates are used directly as sampling coordinates. Lighting is then computed from the sampled color and light-direction vector. The 3D texture corresponds 1:1 to the actual game world with a minimum unit of 1 m, so in areas smaller than 1 m, resolution limits cause staircase artifacts; interpolation and supersampling during texture sampling mitigate these problems. Frame-time measurements showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms (forward) and 104 ms (deferred) with 1,024,766 particle lights.
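The two-step scheme above can be sketched on the CPU. This is a minimal illustration, not the paper's GPU implementation: the grid size, splat radius, and shading model are assumptions, and the two NumPy arrays stand in for the two 3D textures (color sum and direction sum).

```python
import numpy as np

# grid_color accumulates summed light colors per 1 m voxel;
# grid_dir accumulates summed voxel-to-light direction vectors.
SIZE = 16                      # world assumed to span 16 m per axis
grid_color = np.zeros((SIZE, SIZE, SIZE, 3))
grid_dir = np.zeros((SIZE, SIZE, SIZE, 3))

def splat_light(pos, color, radius=1):
    """Step 1: add one particle light's contribution to nearby voxels."""
    ix, iy, iz = (int(c) for c in pos)
    for x in range(max(ix - radius, 0), min(ix + radius + 1, SIZE)):
        for y in range(max(iy - radius, 0), min(iy + radius + 1, SIZE)):
            for z in range(max(iz - radius, 0), min(iz + radius + 1, SIZE)):
                center = np.array([x, y, z]) + 0.5
                d = np.asarray(pos) - center      # voxel -> light direction
                grid_color[x, y, z] += color
                grid_dir[x, y, z] += d / (np.linalg.norm(d) + 1e-6)

def shade(world_pos, normal):
    """Step 2: sample both grids at the pixel's world position (1:1 mapping)."""
    ix, iy, iz = (int(c) for c in world_pos)
    light_dir = grid_dir[ix, iy, iz]
    n = np.linalg.norm(light_dir)
    if n == 0:
        return np.zeros(3)
    # Lambertian term against the summed light direction.
    return grid_color[ix, iy, iz] * max(np.dot(normal, light_dir / n), 0.0)

splat_light((8.2, 8.5, 8.1), np.array([1.0, 0.5, 0.2]))
print(shade((8.5, 7.5, 8.5), np.array([0.0, 1.0, 0.0])))
```

On the GPU, step 1 would run as a compute shader writing with atomic adds, and step 2 would sample the 3D textures with hardware trilinear filtering, which is what provides the interpolation mentioned in the abstract.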

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
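The representation-selection rule described above can be sketched as a small decision function. The thresholds below are illustrative assumptions, not the paper's measured switching distances:

```python
# A minimal sketch of the distance-based representation switch: 3D model at
# close range, billboard at intermediate range, environment map at far range,
# with interaction always preferring real geometry when it exists.
def select_representation(distance_m, has_3d_model, interacting=False,
                          depth_threshold=5.0, billboard_threshold=20.0):
    """Pick a representation for one scene-graph node."""
    if interacting and has_3d_model:
        return "3d_model"            # interaction overrides viewing distance
    if distance_m < depth_threshold and has_3d_model:
        return "3d_model"            # user can perceive internal depth here
    if distance_m < billboard_threshold:
        return "billboard"
    return "environment_map"

print(select_representation(2.0, True))        # close range -> 3d_model
print(select_representation(50.0, True))       # far range -> environment_map
print(select_representation(50.0, True, interacting=True))  # -> 3d_model
```

In the paper's scheme, `depth_threshold` would correspond to the distance at which the user begins to perceive the object's internal depth.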


Water droplet generation technique for 3D water drop sculptures (3차원 물방울 조각 생성장치의 구현을 위한 물방울 생성기법)

  • Lin, Long-Chun;Park, Yeon-yong;Jung, Moon Ryul
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.143-152 / 2019
  • This paper presents two new techniques for solving two problems of the water curtain: 'shape distortion' caused by gravity and 'resolution degradation' caused by fine satellite droplets around the shape. In the first method, when the user converts a three-dimensional model to a vertical sequence of slices, the slices are evenly spaced, and the method adjusts the times at which the equi-distance slices are created by the nozzle array. Even though a drop's velocity increases over time under gravity, the drop slices then maintain equal spacing at the moment the whole shape is formed, preventing distortion. The second method is the minimum-time-interval technique. The minimum time interval is the time between one open command of a nozzle and the next such that consecutive water drops are created cleanly, without satellite drops. When the user converts a three-dimensional model to a sequence of slices, the slices are placed as close as possible rather than evenly spaced, subject to the minimum time interval between consecutive drops: slices are spaced at short intervals near the top of the shape and at longer intervals near the bottom. The minimum time interval is determined in advance by experiment and consists of the time from the open command until the nozzle is fully open, the time the fully open state is maintained, and the time from the close command until the nozzle is fully closed. The second method produces water-drop sculptures with higher resolution than the first.
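The first method's timing adjustment follows directly from free-fall kinematics. The sketch below assumes drops are released at rest from the nozzle; the nozzle height, display time, and slice spacing are illustrative values, not the paper's apparatus:

```python
import math

G = 9.81          # gravity, m/s^2
NOZZLE_H = 2.0    # assumed nozzle height above the base of the display zone, m

def release_time(slice_height, display_time):
    """When must the nozzle open so a drop (released at rest) reaches
    slice_height exactly at display_time?  h(t) = H - g*(t - t_r)^2 / 2."""
    fall = NOZZLE_H - slice_height
    return display_time - math.sqrt(2.0 * fall / G)

# Equi-distance slices of a shape between 0.5 m and 1.5 m, formed at t = 1 s.
heights = [0.5 + 0.1 * k for k in range(11)]
times = [release_time(h, 1.0) for h in heights]

# Lower slices must be released earlier (they fall farther), so the release
# times are NOT uniformly spaced even though the slices are.
for h, t in zip(heights, times):
    print(f"slice at {h:.1f} m -> open nozzle at t = {t:.3f} s")
```

The second method would additionally reject any pair of consecutive release times closer than the experimentally determined minimum time interval.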

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.19 no.6 / pp.866-876 / 2014
  • Generating a digital hologram of video content that includes computer graphics (CG) requires natural fusion of 3D information between the real and virtual. In this paper, we propose a system that fuses real and virtual 3D information naturally and quickly generates a digital hologram of the fused result using a multiple-GPU computer-generated-hologram (CGH) computing component. The system calculates the camera projection matrix of an RGB-Depth camera and estimates the 3D information of the virtual object. The virtual object's 3D information, obtained from the projection matrix, and that of the real space are written to a Z-buffer, which fuses the 3D information naturally. The fused result in the Z-buffer is then passed to the multiple-GPU CGH computing component, which calculates the digital hologram quickly. In experiments, the 3D information of the virtual object produced by the proposed system had a mean relative error (MRE) of about 0.5138% relative to the real 3D information, i.e., about 99% accuracy. In addition, we verified that the proposed system quickly generates the digital hologram of the fused result using multiple-GPU-based CGH calculation.

Design of Vision-based Interaction Tool for 3D Interaction in Desktop Environment (데스크탑 환경에서의 3차원 상호작용을 위한 비전기반 인터랙션 도구의 설계)

  • Choi, Yoo-Joo;Rhee, Seon-Min;You, Hyo-Sun;Roh, Young-Sub
    • The KIPS Transactions: Part B / v.15B no.5 / pp.421-434 / 2008
  • As computer graphics, virtual reality, and augmented reality technologies have developed, many applications based on them require interaction in 3D space, such as selection and manipulation of a 3D object. In this paper, we propose a framework for vision-based 3D interaction that simulates the functions of an expensive 3D mouse in a desktop environment. The proposed framework includes a specially manufactured interaction device using three-color LEDs. By recognizing the position and color of the LEDs in video sequences, various mouse events and 6-DOF interactions are supported. Since the proposed device is more intuitive and easier to use than an existing 3D mouse, which is expensive and requires skilled manipulation, it can be used without additional learning or training. We explain how the three-color-LED pointing device, one component of the proposed framework, is built, how the 3D position and orientation of the pointer are calculated, and how the LED color is analyzed from video sequences. We verify the accuracy and usefulness of the proposed device by measuring the error of the estimated 3D position and orientation.
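The color-recognition step can be sketched as simple per-channel thresholding followed by a centroid computation. The thresholds and frame layout below are assumptions for illustration, not the paper's calibrated values:

```python
import numpy as np

# Find the image-space centroid of the pixels matching one LED color
# in a single video frame.
def led_centroid(frame, channel, threshold=200):
    """frame: HxWx3 uint8 image; channel: 0=R, 1=G, 2=B."""
    others = [c for c in range(3) if c != channel]
    mask = (frame[..., channel] > threshold) & \
           (frame[..., others[0]] < 100) & (frame[..., others[1]] < 100)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                 # this LED is not visible in the frame
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:43, 60:63, 0] = 255        # a small red LED blob
print(led_centroid(frame, channel=0))  # -> (61.0, 41.0)
```

Triangulating such centroids across frames (or combining them with the known geometry of the three LEDs on the device) is what would yield the 3D position and orientation the abstract describes.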

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.295-300 / 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, 3D object information is captured from several viewpoints and stored as elemental images. Users then see a 3D reconstructed image from the elemental images displayed through a lens array. The elemental images can be created by computer graphics, which is referred to as computer-generated integral imaging. The process of creating the elemental images is called image mapping. Several image mapping methods have been proposed, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or degraded performance as the number of elemental lenses in the lens array increases, which makes them difficult to use in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. This paper first describes the concept of VVR and compares the performance of its image mapping process with previous methods, then discusses possible directions for future improvements.
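To make the image-mapping idea concrete, here is a toy 1-D sketch in the spirit of point retracing: one 3D object point is projected through every lens center onto the elemental-image plane. The lens pitch, gap, and array size are illustrative assumptions, not the paper's setup, and real PRR repeats this for every object point, which is why it scales poorly with lens count:

```python
# Project one object point through each lens center (pinhole model).
PITCH = 1.0   # lens pitch (mm), assumed
GAP = 3.0     # lens-array-to-display gap (mm), assumed

def elemental_pixels(point, n_lenses=5):
    """point = (x, z): lateral position and distance in front of the array."""
    px, pz = point
    hits = []
    for i in range(n_lenses):
        lx = (i - n_lenses // 2) * PITCH        # lens center (1-D array)
        # Similar triangles: the ray from the point through the lens center
        # continues behind the lens by GAP before hitting the display plane.
        ex = lx + (lx - px) * GAP / pz
        hits.append((i, round(ex, 3)))
    return hits

for lens, x in elemental_pixels((0.5, 30.0)):
    print(f"lens {lens}: elemental pixel x = {x} mm")
```

VVR's contribution, per the abstract, is reorganizing this work around viewpoint vectors so the cost does not grow with the number of elemental lenses.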

Haptic Modeler using Haptic User Interface (촉감 사용자 인터페이스를 이용한 촉감 모델러)

  • Cha, Jong-Eun;Oakley, Ian;Kim, Yeong-Mi;Kim, Jong-Phil;Lee, Beom-Chan;Seo, Yong-Won;Ryu, Je-Ha
    • 한국HCI학회:학술대회논문집 (Proceedings of the HCI Society of Korea Conference) / 2006.02a / pp.1031-1036 / 2006
  • The haptics field provides the sense of touch for displayed content and is therefore widely studied in medicine, education, the military, and broadcasting. In medicine, products such as Reachin's laparoscopic surgery training software are already commercialized, letting trainees practice surgical procedures while feeling the same forces as in real surgery. Although haptics offers more realistic and natural interaction by adding touch to audiovisual information, it is still unfamiliar to general users. One reason is the lack of haptic-enabled content. Haptic content generally consists of computer graphics models, so the content is created with an ordinary graphics modeler, while the haptic information must be inserted into the file by hand afterwards or programmed separately for each application. Because graphic modeling and haptic modeling do not proceed simultaneously, creating haptic content is time-consuming and adding haptic information is unintuitive. In graphic modeling, users can manipulate content by hand while watching it; in haptic modeling, they must feel the touch and manipulate the content at the same time, which calls for a dedicated interface. This paper describes a haptic modeler that allows haptic-enabled content to be created and manipulated intuitively. In the modeler, the user can create and manipulate 3D content while touching it in real time with a 3-DOF haptic device, and can intuitively edit the surface haptic properties of the content through a haptic user interface. Unlike a conventional 2D graphical user interface operated with a mouse, the haptic user interface is three-dimensional and consists of buttons, radio buttons, sliders, and a joystick operated with the haptic device. The user changes a surface haptic property value by manipulating these components and sets the value intuitively by touching a part of the interface and feeling the resulting sensation in real time. In addition, an XML-based file format allows the created content to be saved, loaded, or added to other content.
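The XML-based file format mentioned above might look something like the sketch below. The element and attribute names (`hapticScene`, `surface`, `stiffness`, `friction`) are hypothetical illustrations, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

def save_haptic_object(path, mesh, stiffness, friction):
    """Write one object and its surface haptic properties to an XML file."""
    root = ET.Element("hapticScene")
    obj = ET.SubElement(root, "object", mesh=mesh)
    ET.SubElement(obj, "surface",
                  stiffness=str(stiffness), friction=str(friction))
    ET.ElementTree(root).write(path)

def load_surface(path):
    """Read the surface haptic properties back from the XML file."""
    surf = ET.parse(path).getroot().find("object/surface")
    return float(surf.get("stiffness")), float(surf.get("friction"))

save_haptic_object("cube.xml", "cube01.obj", 0.8, 0.4)
print(load_surface("cube.xml"))   # -> (0.8, 0.4)
```

Keeping the haptic properties alongside the mesh reference in one file is what lets the modeler save, reload, and merge haptic content, as the abstract describes.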


VRML Model Retrieval System Based on XML (XML 기반 VRML 모델 검색 시스템)

  • Im, Min-San;Gwun, O-Bong;Song, Ju-Whan
    • Proceedings of the Korean Information Science Society Conference / 2005.07a / pp.709-711 / 2005
  • With advances in computer graphics, the number of 3D models is growing exponentially. Systems that search only text or 2D images cannot retrieve 3D models accurately, so 3D model retrieval systems are needed, and research is under way in many fields on 3D model retrieval descriptors and search algorithms that improve accuracy and speed. The main goal of this paper is to convert VRML models into XML data and use them for 3D model retrieval. Two search methods are used: retrieval of basic shapes via classification of VRML nodes, and retrieval using the mass center generated during the XML conversion. By building a 3D model database, the system exploits classification through VRML nodes and support for a labeled 3D model database. Each 3D model is stored and processed as classified XML data with a generated key (descriptor), so as the number of similarity comparisons grows, models can be retrieved directly from the database, increasing retrieval speed and performance. For similarity comparison of more complex 3D models, we use the LFD (Light Field Descriptor) [6], the method rated most accurate on the Princeton Shape Benchmark (PSB) [1]. LFD retrieves by extracting 2D images from the 3D model, but comparing the many 2D viewpoints and the rendered images is computationally expensive. We therefore propose obtaining viewpoints along the x, y, and z axes, reducing the number of 2D viewpoints and thus greatly reducing the computation.
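A mass-center key of the kind mentioned above can be sketched as follows. The normalization scheme and similarity measure here are assumptions for illustration, not the paper's exact descriptor:

```python
import numpy as np

def mass_center_key(vertices):
    """Center a vertex cloud on its mass center and normalize its scale,
    giving a crude translation/scale-invariant comparison key."""
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)                 # the mass center
    centered = v - center
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

def similarity(key_a, key_b):
    """Lower is more similar: mean per-coordinate distance of matched vertices."""
    return float(np.abs(key_a - key_b).mean())

cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
big_cube = [(2 * x + 5, 2 * y, 2 * z) for x, y, z in cube]   # scaled + moved
print(similarity(mass_center_key(cube), mass_center_key(big_cube)))  # ~0.0
```

Storing such a key with each XML-converted model is what allows the database to answer coarse similarity queries directly, reserving the expensive LFD comparison for the complex cases.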


Real-Time Shadow Generation using Image Warping (이미지 와핑을 이용한 실시간 그림자 생성 기법)

  • Kang, Byung-Kwon;Ihm, In-Sung
    • Journal of KIISE: Computer Systems and Theory / v.29 no.5 / pp.245-256 / 2002
  • Shadows are important elements in producing a realistic image. Generating exact shapes and positions of shadows is essential in rendering since it provides users with visual cues about the scene. It is also very important to create soft shadows from area light sources, since they drastically increase visual realism. In spite of their importance, existing shadow generation algorithms still have problems producing realistic shadows in real time. While image-based rendering techniques can often be applied effectively to real-time shadow generation, they usually demand large amounts of memory for storing precomputed shadow maps; an effective compression method can reduce the memory requirement, but only at additional decoding cost. In this paper, we propose a new image-based shadow generation method based on image warping. With this method, it is possible to generate realistic shadows using only small pre-generated shadow maps, and the method extends easily to soft shadow generation. Our method can be used efficiently for generating realistic scenes in many real-time applications such as 3D games and virtual reality systems.
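For context, here is a minimal sketch of the image-based ingredient such methods build on: a depth map rendered from the light's viewpoint, followed by a per-point depth comparison. The warping of pre-generated maps to new light positions, which is the paper's contribution, is not shown; the resolution and bias are illustrative values:

```python
import numpy as np

LIGHT_RES = 4
shadow_map = np.full((LIGHT_RES, LIGHT_RES), np.inf)

def render_blocker(u, v, depth):
    """Store the nearest blocker depth seen through light-space texel (u, v)."""
    shadow_map[v, u] = min(shadow_map[v, u], depth)

def in_shadow(u, v, depth, bias=1e-3):
    """A point is shadowed if something closer to the light covers its texel."""
    return depth > shadow_map[v, u] + bias

render_blocker(1, 1, 2.0)            # an occluder at depth 2 in texel (1, 1)
print(in_shadow(1, 1, 5.0))          # point behind the occluder -> shadowed
print(in_shadow(2, 2, 5.0))          # empty texel -> lit
```

Warping lets one reuse a few such maps for nearby light positions instead of storing a large precomputed set, which is the memory saving the abstract claims.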