• Title/Summary/Keyword: 3D Computer Graphics

3DARModeler: a 3D Modeling System in Augmented Reality Environment (3DARModeler : 증강현실 환경 3D 모델링 시스템)

  • Do, Trien Van;Lee, Jeong-Gyu;Lee, Jong-Weon
    • Journal of Korea Game Society / v.9 no.5 / pp.127-136 / 2009
  • This paper describes a 3D modeling system in an Augmented Reality environment, named 3DARModeler. It can be considered a simple version of 3D Studio Max with the functions necessary for a modeling system, such as creating objects, applying textures, adding animation, estimating real light sources, and casting shadows. The 3DARModeler introduces convenient and effective human-computer interaction for building 3D models. It targets non-technical users, who therefore do not need much knowledge of computer graphics or modeling techniques. All they have to do is select basic objects, customize their attributes, and put them together to build a 3D model in a simple and intuitive way, as if they were doing so in the real world.

  • PDF
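
The assembly workflow the abstract describes (pick basic primitives, set their attributes, combine them into a model) can be illustrated with a minimal, hypothetical scene-graph sketch in Python; the class names, attributes, and texture files below are illustrative assumptions, not part of 3DARModeler.

```python
# Minimal sketch (not 3DARModeler's actual API): composing basic primitives
# with per-object attributes into a single model, as the abstract describes.
from dataclasses import dataclass, field

@dataclass
class Primitive:
    kind: str                          # e.g. "cube", "cylinder", "pyramid"
    position: tuple = (0.0, 0.0, 0.0)  # placement in the model's local frame
    scale: tuple = (1.0, 1.0, 1.0)
    texture: str = ""                  # texture image assigned by the user

@dataclass
class Model:
    parts: list = field(default_factory=list)

    def add(self, prim: Primitive) -> None:
        self.parts.append(prim)

# A "house" built from basic objects, the way a non-technical user would:
house = Model()
house.add(Primitive("cube", position=(0, 0, 0), scale=(2, 1, 2), texture="brick.png"))
house.add(Primitive("pyramid", position=(0, 1, 0), scale=(2, 1, 2), texture="roof.png"))
print(len(house.parts), "parts")
```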

Multi-scale 3D Panorama Content Augmented System using Depth-map

  • Kim, Cheeyong;Kim, Eung-Kon;Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.17 no.6 / pp.733-740 / 2014
  • With the development and spread of 3D displays, users can easily experience augmented reality with 3D features, so the demand for augmented reality content is growing exponentially in various fields. A traditional augmented reality environment is generally created with CG (Computer Graphics) modeling production tools. However, this method takes too much time and effort: to create an augmented environment similar to the real world, everything in the real world has to be measured, modeled, and placed in the augmented environment, and even then the result does not match the real world, making it hard for users to feel a sense of reality. In this study, a multi-scale 3D panorama content augmented system using a depth map is proposed. By finding matching features between images to add 3D information to the augmented environment, a depth map is derived and rendered as a panorama, producing high-quality augmented content with a sense of reality. This work overcomes the limits of 2D panorama technologies and provides users with a sense of reality and immersion through natural navigation.
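
The depth-map step the abstract outlines (match features between two views, then turn matches into depth) can be sketched roughly as below. This is a generic feature-matching illustration with OpenCV, not the paper's actual pipeline; the image file names, focal length, and baseline are assumptions.

```python
# Rough illustration only: feature matching between two views and a simple
# disparity-to-depth conversion. Not the paper's pipeline; focal length,
# baseline and image names are made-up assumptions.
import cv2
import numpy as np

left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)    # assumed input images
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_l, des_r)

focal, baseline = 700.0, 0.1   # assumed intrinsics (pixels) and baseline (meters)
depths = []
for m in matches:
    x_l = kp_l[m.queryIdx].pt[0]
    x_r = kp_r[m.trainIdx].pt[0]
    disparity = x_l - x_r
    if disparity > 1e-3:                              # ignore degenerate matches
        depths.append(focal * baseline / disparity)   # Z = f * B / d
print(f"{len(depths)} sparse depth samples recovered")
```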

Realistic 3D Brush Model for Computer Generated Sumuk Painting (컴퓨터 그래픽 수묵화를 위한 사실적인 3차원 브러쉬 모델)

  • Kang, Hyungjun;Jung, Moon Ryul;Jung, Dong Am
    • Journal of the Korea Computer Graphics Society / v.8 no.3 / pp.35-42 / 2002
  • Existing painting software pursues results that merely look similar to real paintings and does not try to faithfully reproduce actual brush motion or touch. Of the two key elements of Sumuk (ink-and-wash) painting, the use of ink (yongmukbeop) and the handling of the brush (unpilbeop), this paper aims to reproduce the brush-handling element realistically through computer graphics. To this end, a tablet with five degrees of freedom is used, capable of capturing all the motions of ink-and-wash brushwork. A model is established that reproduces the entire process by which ink is transferred from the hand to the brush and, via water and ink, from the brush to the paper. To implement realistic ink-and-wash brushwork, the model consists of a 3D brush model, a 3D deformation model, a bristle model, an intersection-surface model, and an ink deposition model. Through a production process identical to painting a real ink-and-wash work, results similar to Sumuk paintings with realistic brush movement and ink spreading were obtained. In addition, bristle, intersection, and paper models were built to precisely control the amounts of ink and water delivered to the paper, with which real ink-and-wash brush techniques were simulated realistically.

  • PDF
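
The ink deposition idea in the abstract above, ink and water moving from the bristles onto the paper at the brush-paper contact, can be sketched with a toy grid model. The transfer law, grid layout, and parameter values below are assumptions for illustration, not the paper's model.

```python
# Toy sketch of an ink deposition step (not the paper's actual model):
# ink carried by the brush is deposited onto a paper grid under the
# contact footprint, proportional to pressure and the brush's water load.
import numpy as np

paper = np.zeros((64, 64))       # accumulated ink on the paper
brush_ink = 1.0                  # ink currently loaded in the bristles
brush_water = 0.8                # water load; wetter brushes release ink faster

def deposit(paper, cx, cy, radius, pressure, ink, water, rate=0.05):
    """Deposit ink inside the circular brush footprint; return remaining ink."""
    ys, xs = np.ogrid[:paper.shape[0], :paper.shape[1]]
    footprint = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    amount = rate * pressure * water * ink                 # assumed transfer law
    paper[footprint] += amount
    return ink - amount * footprint.sum() / footprint.size

# Drag the brush along a short horizontal stroke.
for x in range(10, 50):
    brush_ink = deposit(paper, cx=x, cy=32, radius=4,
                        pressure=0.6, ink=brush_ink, water=brush_water)
print("ink left in brush:", round(brush_ink, 3))
```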

Mesh-based ray tracing system using Vulkan (Vulkan을 이용한 메시 기반의 광선추적기)

  • Kim, Ji-On;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.54-56 / 2018
  • Motivated by the fact that graphics systems are gradually shifting toward ray tracing and that 3D graphics APIs such as OpenGL and Direct3D mainly use triangle meshes, this paper develops a ray tracer using the Vulkan API, the Khronos Group's next-generation graphics standard. Triangle meshes were applied to the ray tracer and a performance evaluation was carried out.
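
At the heart of a mesh-based ray tracer like the one above is a ray-triangle intersection test. A minimal Möller-Trumbore sketch in Python is shown below as a generic illustration; it is not tied to Vulkan or to the authors' implementation.

```python
# Generic Möller-Trumbore ray-triangle intersection (illustrative only;
# the paper's ray tracer runs on the GPU through the Vulkan API).
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return hit distance t, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# One ray against one triangle of a mesh.
tri = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
t = ray_triangle(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), *tri)
print("hit at t =", t)   # expected: 1.0
```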

Neural Relighting using Specular Highlight Map (반사 하이라이트 맵을 이용한 뉴럴 재조명)

  • Lee, Yeonkyeong;Go, Hyunsung;Lee, Jinwoo;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.87-97 / 2020
  • In this paper, we propose a novel neural relighting method that infers a relighted rendering image from a user-guided specular highlight map. The proposed network uses, as its backbone, a pre-trained neural renderer learned from rendered images of a 3D scene under various lighting conditions. We jointly optimize a 3D light position and its associated relighted image by back-propagation, so that the difference between the base image and the relighted image matches the user-guided specular highlight map. The proposed method has the advantage of explicitly inferring the 3D light position while providing the 2D screen-space interface that artists prefer. The performance of the proposed network was measured under conditions where ground truths can be established; the average error of the light-position estimates is 0.11 with respect to the normalized 3D scene size.
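
The optimization loop the abstract describes, adjusting a 3D light position by backpropagation until the rendered highlights match a target highlight map, can be sketched in PyTorch with a simple differentiable specular term standing in for the paper's pre-trained neural renderer. The Blinn-Phong stand-in, scene geometry, resolutions, and learning rate below are all assumptions.

```python
# Sketch of optimizing a 3D light position by backpropagation so that the
# rendered specular highlights match a target highlight map. A toy Blinn-Phong
# specular term replaces the paper's pre-trained neural renderer.
import torch

H, W = 64, 64
# Per-pixel surface normals and view direction for a flat, front-facing patch.
normals = torch.zeros(H, W, 3)
normals[..., 2] = 1.0
view = torch.tensor([0.0, 0.0, 1.0])
# Pixel positions on the z=0 plane in [-1, 1] x [-1, 1].
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
points = torch.stack([xs, ys, torch.zeros_like(xs)], dim=-1)

def specular_map(light_pos, shininess=32.0):
    """Differentiable Blinn-Phong specular highlight map for a point light."""
    light_dir = torch.nn.functional.normalize(light_pos - points, dim=-1)
    half = torch.nn.functional.normalize(light_dir + view, dim=-1)
    return torch.clamp((normals * half).sum(-1), min=0.0) ** shininess

# Target highlight map produced by a "true" light the optimizer must recover.
target = specular_map(torch.tensor([0.4, -0.3, 1.5]))

light = torch.tensor([0.0, 0.0, 2.0], requires_grad=True)   # initial guess
opt = torch.optim.Adam([light], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(specular_map(light), target)
    loss.backward()
    opt.step()
print("estimated light position:", light.detach().numpy().round(3))
```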

3D Face Modeling Using Mesh Simplification (메쉬 간략화를 이용한 3차원 얼굴모델링)

  • 이현철;허기택
    • The Journal of the Korea Contents Association / v.3 no.4 / pp.69-76 / 2003
  • Recently, research on 3D animation has been very active in computer graphics. One of the important research areas in 3D animation is the animation of human beings. The creation and animation of 3D facial models has depended on animators' manual, frame-by-frame work, which requires considerable effort and time as well as various hardware and software. In this paper, we implement a way to generate a 3D human face model easily and quickly using only frontal face images. We then suggest a methodology for simplifying the mesh data of the 3D generic model.

  • PDF
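
As a rough illustration of mesh simplification in general (the paper's own simplification method for the generic face model is not detailed in the abstract), the sketch below reduces a triangle mesh by vertex clustering on a uniform grid; the grid size and data layout are assumptions.

```python
# Generic vertex-clustering mesh simplification sketch (illustrative only;
# not the simplification method proposed in the paper).
import numpy as np

def simplify_by_clustering(vertices, faces, cell_size):
    """Snap vertices to grid cells, merge each cell to its centroid,
    and drop faces that collapse into fewer than three distinct vertices."""
    cells = np.floor(vertices / cell_size).astype(np.int64)
    _, cluster_id, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    cluster_id = cluster_id.ravel()
    # New vertex = centroid of the original vertices in each cluster.
    new_vertices = np.zeros((counts.size, 3))
    np.add.at(new_vertices, cluster_id, vertices)
    new_vertices /= counts[:, None]
    # Remap faces and keep only non-degenerate triangles.
    remapped = cluster_id[faces]
    keep = (remapped[:, 0] != remapped[:, 1]) & \
           (remapped[:, 1] != remapped[:, 2]) & \
           (remapped[:, 0] != remapped[:, 2])
    return new_vertices, remapped[keep]

# Tiny example: a random point cloud triangulated arbitrarily.
rng = np.random.default_rng(0)
verts = rng.random((200, 3))
tris = rng.integers(0, 200, size=(300, 3))
v2, f2 = simplify_by_clustering(verts, tris, cell_size=0.25)
print(len(verts), "->", len(v2), "vertices;", len(tris), "->", len(f2), "faces")
```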

Real-time Full-view 3D Human Reconstruction using Multiple RGB-D Cameras

  • Yoon, Bumsik;Choi, Kunwoo;Ra, Moonsu;Kim, Whoi-Yul
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.224-230 / 2015
  • This manuscript presents a real-time solution for 3D human body reconstruction with multiple RGB-D cameras. The proposed system uses four consumer RGB/Depth (RGB-D) cameras, each located at approximately 90° from the next camera around a freely moving human body. A single mesh is constructed from the captured point clouds by iteratively removing the estimated overlapping regions from the boundary. A cell-based mesh construction algorithm is developed, recovering the 3D shape from various conditions, considering the direction of the camera and the mesh boundary. The proposed algorithm also allows problematic holes and/or occluded regions to be recovered from another view. Finally, calibrated RGB data is merged with the constructed mesh so it can be viewed from an arbitrary direction. The proposed algorithm is implemented with general-purpose computation on graphics processing unit (GPGPU) for real-time processing owing to its suitability for parallel processing.
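
The merging step the abstract describes, bringing each camera's point cloud into a common frame and trimming the overlap between adjacent views, can be sketched as below. The calibration poses and the simple distance-based overlap test are assumptions standing in for the paper's cell-based boundary processing.

```python
# Sketch: fuse point clouds from several calibrated RGB-D cameras by
# transforming them into a common frame and discarding points that overlap
# an already-merged view (a stand-in for the paper's cell-based boundary handling).
import numpy as np
from scipy.spatial import cKDTree

def transform(points, pose):
    """Apply a 4x4 camera-to-world pose to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ pose.T)[:, :3]

def fuse(clouds, poses, overlap_dist=0.01):
    merged = transform(clouds[0], poses[0])
    for cloud, pose in zip(clouds[1:], poses[1:]):
        world = transform(cloud, pose)
        # Drop points closer than overlap_dist to the points merged so far.
        dist, _ = cKDTree(merged).query(world)
        merged = np.vstack([merged, world[dist > overlap_dist]])
    return merged

# Four synthetic clouds with identity poses, just to exercise the code path.
rng = np.random.default_rng(1)
clouds = [rng.random((1000, 3)) for _ in range(4)]
poses = [np.eye(4) for _ in range(4)]
print("fused points:", len(fuse(clouds, poses)))
```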

Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE: Computer Systems and Theory / v.33 no.11 / pp.853-858 / 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphics objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, which is slow because it is hardly assisted by graphics hardware and its surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface rendering in response to varying illumination is performed in the vertex shader, while adaptive point splatting is performed in the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
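
The core DIBR operation, forward-warping pixels of a depth image into a new view and splatting them, can be illustrated with a small CPU reference in Python. The intrinsics, image sizes, and single-pixel splat below are assumptions, and the paper's contribution is doing this work in GPU vertex/fragment shaders rather than on the CPU.

```python
# CPU reference sketch of depth image-based rendering (the paper does this
# on the GPU via vertex/fragment shaders): unproject each depth pixel to 3D,
# reproject it into a target view, and splat its color there.
import numpy as np

H, W = 120, 160
K = np.array([[150.0, 0, W / 2], [0, 150.0, H / 2], [0, 0, 1]])  # assumed intrinsics
color = np.random.rand(H, W, 3)          # source color image (stand-in data)
depth = np.full((H, W), 2.0)             # source depth image, 2 m everywhere

# Target camera: shifted 10 cm to the right of the source camera.
R, t = np.eye(3), np.array([-0.1, 0.0, 0.0])

us, vs = np.meshgrid(np.arange(W), np.arange(H))
pixels = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3)

# Unproject to 3D in the source camera frame, then move into the target frame.
points = (np.linalg.inv(K) @ pixels.T).T * depth.reshape(-1, 1)
points_t = points @ R.T + t

# Reproject and splat (nearest pixel, z-buffered).
proj = (K @ points_t.T).T
uvs = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
out = np.zeros((H, W, 3))
zbuf = np.full((H, W), np.inf)
for (u, v), z, c in zip(uvs, points_t[:, 2], color.reshape(-1, 3)):
    if 0 <= u < W and 0 <= v < H and z < zbuf[v, u]:
        zbuf[v, u], out[v, u] = z, c
print("non-empty target pixels:", int((zbuf < np.inf).sum()))
```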

OpenGL ES 2.0 based Shader Compilation Method for the Instruction-Level Parallelism (OpenGL ES 2.0 기반 셰이더 명령어 병렬 처리를 위한 컴파일 기법)

  • Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society / v.8 no.2 / pp.69-76 / 2008
  • In this paper, we present the architecture of a graphics processor and its instruction format for mobile devices. In addition, we introduce the shader data structure for on/off-line compilation based on OpenGL ES 2.0 and a new optimization method based on ILP (Instruction-Level Parallelism). We show that, on a processor with the same core clock, the shader instructions produced by the compilation structure and method in this paper run approximately 1.5 to 2 times faster than code based on single instructions.

  • PDF
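
As a rough sketch of the instruction-level-parallelism idea the abstract refers to, the toy scheduler below packs independent shader-like instructions into the same issue slot whenever their operands do not depend on a result written earlier in that slot. The instruction encoding and dependency rules are assumptions, not the paper's compiler.

```python
# Toy ILP scheduler sketch (not the paper's compiler): greedily pack
# instructions into issue slots, allowing two instructions per slot if
# neither reads a register written earlier in the same slot.
def schedule(instrs, width=2):
    """instrs: list of (dest, src1, src2) register names."""
    slots, current, written = [], [], set()
    for dest, s1, s2 in instrs:
        depends = s1 in written or s2 in written
        if depends or len(current) == width:
            slots.append(current)
            current, written = [], set()
        current.append((dest, s1, s2))
        written.add(dest)
    if current:
        slots.append(current)
    return slots

# Simple shader-like sequence: r2 and r3 are independent, r4 depends on both.
program = [
    ("r2", "r0", "r1"),
    ("r3", "r0", "r0"),
    ("r4", "r2", "r3"),
    ("r5", "r1", "r1"),
]
for i, slot in enumerate(schedule(program)):
    print(f"slot {i}: {[d for d, _, _ in slot]}")
# Expected packing: [r2, r3], [r4, r5] -> 2 slots instead of 4.
```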

Implementation of a Physically Based Motion Engine (물리 기반 모션 엔진의 구현)

  • 정일권;박기주;이인호
    • Proceedings of the IEEK Conference / 2003.07d / pp.1415-1418 / 2003
  • Recent performance improvements in computer and graphics hardware make it possible to simulate physical phenomena in real time. The VR department at ETRI has implemented a fast and robust physically based motion engine (PBM) for its general-purpose 3D online game engine, 'Dream 3D'. This paper shows the underlying algorithms of the PBM and briefly introduces its structure and implementation results.

  • PDF
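
As a generic illustration of what a physically based motion engine computes each frame (the abstract does not detail the PBM's algorithms), the sketch below advances a rigid body with semi-implicit Euler integration under gravity; the time step, mass, and bounce handling are assumptions.

```python
# Generic semi-implicit Euler step for a falling, bouncing rigid body
# (illustrative only; not the PBM engine's actual algorithm).
import numpy as np

gravity = np.array([0.0, -9.81, 0.0])
dt = 1.0 / 60.0          # assumed simulation step (one frame at 60 Hz)
mass = 1.0
restitution = 0.5        # energy kept after bouncing off the ground plane y=0

pos = np.array([0.0, 2.0, 0.0])
vel = np.array([1.0, 0.0, 0.0])

for frame in range(240):
    force = mass * gravity
    vel = vel + (force / mass) * dt        # update velocity first ...
    pos = pos + vel * dt                   # ... then position (semi-implicit Euler)
    if pos[1] < 0.0:                       # simple ground-plane collision response
        pos[1] = 0.0
        vel[1] = -vel[1] * restitution
print("position after 4 seconds:", pos.round(3))
```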