• Title/Summary/Keyword: MEL Script

Search Results: 7

3D Simulation of Maze Solver Micromouse using MEL Script (MAYA의 MEL Script로 구현한 미로찾기로봇 3D시뮬레이션)

  • Kim, Min-soo; Lee, Im-geun
    • Proceedings of the Korean Society of Computer Information Conference / 2014.07a / pp.201-202 / 2014
  • In this paper, we propose a method for implementing a maze-solving robot as a 3D simulation using MEL Script in MAYA, which can represent 3D space visually. MAYA applies X-Y-Z coordinate information to every created object and exposes these values as the object's attributes. Using this, a matrix is generated according to the rules for building a random maze map, and creating objects whose X-Y-Z coordinates follow the matrix's indices and values completes a random maze map. The X-Z coordinates of the robot, which moves according to the path-finding rules, are then stored for each frame, and playing them back lets the maze-solving robot simulation be observed visually. The method makes it easy to try out maze solving without the chore of building arbitrary maze maps by hand and without a physical robot. This simulator should be useful for testing maze-solving algorithms.
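
  The abstract's pipeline (matrix -> wall objects -> per-frame keyframes) maps directly onto a few MEL commands. The following is a minimal sketch of that idea; the 10x10 grid, the random fill rule, the object names, and the placeholder path are illustrative assumptions, not values from the paper.

      // Build a random maze from a binary matrix: index -> X-Z position.
      int $size = 10;
      int $maze[];   // flattened $size x $size matrix: 1 = wall, 0 = path
      int $r, $c;
      for ($r = 0; $r < $size; $r++) {
          for ($c = 0; $c < $size; $c++) {
              // Random fill stands in for the paper's maze-generation rule.
              $maze[$r * $size + $c] = (rand(1.0) < 0.3) ? 1 : 0;
          }
      }
      for ($r = 0; $r < $size; $r++) {
          for ($c = 0; $c < $size; $c++) {
              if ($maze[$r * $size + $c] == 1) {
                  string $wall[] = `polyCube -w 1 -h 1 -d 1`;
                  move -absolute $c 0.5 $r $wall[0];
              }
          }
      }
      // Replay the solver: store the robot's X-Z position per frame as keys.
      string $robot[] = `polyCube -w 0.5 -h 0.5 -d 0.5 -name "robot"`;
      float $pathX[] = {0, 0, 1, 2};   // placeholder solver output
      float $pathZ[] = {0, 1, 1, 1};
      int $f;
      for ($f = 0; $f < size($pathX); $f++) {
          setKeyframe -time ($f + 1) -attribute "translateX" -value $pathX[$f] $robot[0];
          setKeyframe -time ($f + 1) -attribute "translateZ" -value $pathZ[$f] $robot[0];
      }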


Correcting 3D camera tracking data for video composition (정교한 매치무비를 위한 3D 카메라 트래킹 기법에 관한 연구)

  • Lee, Jun-Sang; Lee, Imgeun
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.105-106 / 2012
  • In general, a CG composite is judged well done when it looks 'natural.' The captured footage is not always a static shot: when the camera moves, the CG must be matched precisely to the live-action camera movement for the composite to look natural. This requires 3D camera tracking at the compositing stage. Camera tracking includes reconstructing the 3D space at the time of shooting, such as the camera's 3D motion and its optical parameters, from the live-action footage alone. Errors that arise in this process cause many productivity problems in compositing live action with CG. In this paper, we propose a method for correcting the tracking data in software to resolve these problems.
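
  The paper describes the correction only at a high level, but a common software fix of this kind is low-pass filtering the tracked camera's animation curves. Below is a minimal MEL sketch of a 3-key moving average; the camera name "trackedCam", the channel, and the window size are assumptions, not the paper's actual algorithm.

      // Smooth a tracked camera channel with a 3-key moving average.
      string $attr = "trackedCam.translateX";
      float $vals[] = `keyframe -q -valueChange $attr`;   // snapshot first
      int $n = size($vals);
      int $i;
      for ($i = 1; $i < $n - 1; $i++) {
          float $avg = ($vals[$i - 1] + $vals[$i] + $vals[$i + 1]) / 3.0;
          keyframe -e -index $i -absolute -valueChange $avg $attr;
      }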


Jitter Correction of the Face Motion Capture Data for 3D Animation

  • Lee, Junsang; Han, Soowhan; Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.39-45 / 2015
  • Along with the advance of digital technology, various methods have been adopted for capturing 3D animation data. In the 3D animation production market in particular, motion capture systems are widely used to make films, games, and animation contents. The technique quickly tracks the movements of an actor and translates the data for use as the animated character's motion. The animated characters can thus mimic natural motion and gesture, and even facial expression. However, conventional motion capture systems impose demanding requirements on space, lighting, the number of cameras, and so on. Furthermore, the data acquired from such systems is frequently corrupted by noise, drift, and the surrounding environment. In this paper, we introduce post-production techniques for stabilizing the jitter in motion capture data from a low-cost handy system based on Kinect.
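
  For jitter of this kind, a median filter is a standard post-production choice because it rejects single-frame spikes that a moving average would smear. Here is a minimal MEL sketch; the node name "faceMarker", the channel, and the 3-key window are assumptions rather than the paper's exact method.

      // Suppress jitter spikes on a captured channel with a 3-point median.
      string $attr = "faceMarker.translateY";
      float $src[] = `keyframe -q -valueChange $attr`;   // snapshot first
      int $n = size($src);
      int $i;
      for ($i = 1; $i < $n - 1; $i++) {
          float $win[] = {$src[$i - 1], $src[$i], $src[$i + 1]};
          float $sorted[] = sort($win);
          keyframe -e -index $i -absolute -valueChange $sorted[1] $attr;
      }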

A Study on Correcting Virtual Camera Tracking Data for Digital Compositing (디지털영상 합성을 위한 가상카메라의 트래킹 데이터 보정에 관한 연구)

  • Lee, Junsang; Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.17 no.11 / pp.39-46 / 2012
  • The development of the computer has widened the ways of expressing natural objects and scenes. Cutting-edge computer graphics technologies can effectively create any image we imagine. Although computer graphics plays an important role in film and video production, the state of the domestic content production industry does not favor doing production and research at the same time. In digital compositing, the match moving stage, which composites the captured real sequence with computer graphics imagery, goes through many complicated processes. Camera tracking is the most important issue in this stage; it comprises estimating the 3D trajectory and the optical parameters of the real camera. Because the estimation is based only on the captured sequence, many errors arise that make the process more difficult. In this paper we propose a method for correcting the tracking data. The proposed method can alleviate unwanted camera shaking and object bouncing effects in the composited scene.
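
  Corrected tracking data ultimately has to drive a CG camera inside the 3D package. As a rough illustration of that hand-off, the MEL sketch below reads per-frame positions from a text file and keyframes a camera; the file path, the "frame tx ty tz" line format, and the camera name "shotCam" are all assumptions for illustration.

      // Keyframe a camera from a text file of corrected tracking data.
      string $path = "C:/tracks/corrected_track.txt";
      int $fileId = fopen($path, "r");
      string $line = fgetline($fileId);
      while (size($line) > 0) {
          string $tok[];
          int $count = tokenize($line, " ", $tok);
          if ($count == 4) {   // expected: frame tx ty tz
              int $frame = (int)$tok[0];
              setKeyframe -time $frame -attribute "translateX" -value ((float)$tok[1]) "shotCam";
              setKeyframe -time $frame -attribute "translateY" -value ((float)$tok[2]) "shotCam";
              setKeyframe -time $frame -attribute "translateZ" -value ((float)$tok[3]) "shotCam";
          }
          $line = fgetline($fileId);
      }
      fclose($fileId);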

Development of A News Event Reenactment System (사건재연 시스템 개발)

  • 윤여천; 변혜원; 전성규; 박창섭
    • Journal of Broadcast Engineering / v.7 no.1 / pp.21-27 / 2002
  • This paper presents a news event reenactment system (NERS), which generates virtual character animations in a quick and convenient manner. NERS can thus be used to produce computer graphics (CG) scenes of news events that are hard to photograph, such as fires, traffic accidents, murder cases, and so on. Using a large store of captured motion data and CG model data, the system produces an appropriate animation of virtual characters straightforwardly, without any motion capture device or actors at the authoring stage. NERS is designed to be capable of making virtual characters move along user-defined paths, stitching motions smoothly, and modifying the positions of a virtual character's articulations in a specific frame. A virtual character can therefore be controlled precisely so as to interact with the virtual environment and other characters. NERS provides both an interactive and a script-based (MEL: Maya Embedded Language) interface so that users can operate the system conveniently. It has been implemented as a plug-in for the commercial CG tool Maya (Alias|Wavefront) in order to make use of Maya's advanced functions.
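
  Of the capabilities listed, moving a character along a user-defined path is the most direct to show in MEL, since Maya exposes it through the pathAnimation command. A minimal sketch follows; the curve points, frame range, and the cube standing in for a character are assumptions, not NERS code.

      // Attach a proxy object to a user-defined path from frame 1 to 120.
      string $route = `curve -d 3 -p 0 0 0 -p 4 0 2 -p 8 0 -2 -p 12 0 0`;
      string $actor[] = `polyCube -name "actorProxy"`;
      // pathAnimation keys the object's travel along the curve and
      // orients it to follow the direction of motion.
      pathAnimation -c $route -stu 1 -etu 120 -follow true $actor[0];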

A Study on Sound Synchronized Out-Focusing Techniques for 3D Animation (음원 데이터를 활용한 3D 애니메이션 카메라 아웃포커싱 표현 연구)

  • Lee, Junsang; Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.57-65 / 2014
  • The role of sound in producing a 3D animation clip is one of the important factors in maximizing the immersive effect of a scene. In particular, the interaction between video and sound makes scene expression more vivid, and it is applied in diverse ways in video production. Among these interaction techniques, out-focusing is frequently used in both live-action video and 3D animation. In 3D animation, however, out-focusing is not as easily implemented as it is in music videos or explosion scenes in live-action shots. This paper analyzes sound data in order to synchronize the depth of field with it. A novel out-focusing technique is proposed in which an object's depth of field is controlled by the beat rhythm in the sound data.
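
  The out-focusing itself comes down to keyframing the camera's depth-of-field attributes at beat times. The MEL sketch below pulses the f-stop around hard-coded beat frames; the camera name, beat positions, and f-stop values are placeholders, whereas the paper extracts the beats from the sound data itself.

      // Pulse defocus on each beat by keying the camera's f-stop.
      string $camShape = "renderCamShape";
      setAttr ($camShape + ".depthOfField") 1;
      float $beatFrames[] = {24, 48, 72, 96};   // placeholder beat positions
      float $f;
      for ($f in $beatFrames) {
          // Sharp just before the beat, shallow (defocused) on the beat.
          setKeyframe -time ($f - 4) -attribute "fStop" -value 16 $camShape;
          setKeyframe -time $f -attribute "fStop" -value 2.8 $camShape;
          setKeyframe -time ($f + 6) -attribute "fStop" -value 16 $camShape;
      }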

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok; Yun, Tae-Soo; Lee, Dong-Hoon
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.746-754 / 2008
  • A 3D stereoscopic image is generated by interleaving, in video editing tools, the scenes rendered from two cameras' views in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R). However, the depth of an object in a static scene and the continuous stereo effect under view transformations are not represented naturally, because the user must render the views from both cameras only after choosing an arbitrary convergence angle and a distance between the model and the two cameras. The user therefore has to go through a process of adjusting the camera interval and rendering repeatedly, which takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems, and we expose the system's inherent limitations. Two cameras' views can be generated and the stereo effect confirmed in real time within the 3D modeling tool, so the immersion of the 3D stereoscopic image can be judged intuitively in real time using the stereoscopic preview function.
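
  The rig such a preview works with can be approximated by two cameras under one parent, separated by an interaxial distance. The MEL sketch below builds a simple parallel rig; the 6.5 cm separation is a common stereoscopy default assumed here, not a value from the paper, and convergence handling is omitted.

      // A minimal parallel stereo rig: two cameras offset along X.
      float $interaxial = 6.5;   // assumed separation (cm)
      string $rig = `group -empty -name "stereoRig"`;
      string $left[] = `camera -name "leftCam"`;
      string $right[] = `camera -name "rightCam"`;
      parent $left[0] $right[0] $rig;
      setAttr ($left[0] + ".translateX") (-$interaxial / 2.0);
      setAttr ($right[0] + ".translateX") ($interaxial / 2.0);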
