• Title/Summary/Keyword: Motion Capture Animation


Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions:PartA
    • /
    • v.11A no.2
    • /
    • pp.189-194
    • /
    • 2004
  • This paper describes a method for projecting high-dimensional facial expression motion data onto a two-dimensional space, and a method for creating facial expression animation in real time as an animator navigates this space and selects the desired expressions. The expression space was built from about 2,400 facial expression frames. Constructing the space requires determining the shortest distance between any two expressions. The expression space, treated as a manifold, approximates the distance between two points as follows. Each expression is represented by a state vector derived from the distance matrix between the facial markers; when two expressions are adjacent, the distance between their state vectors is taken as an approximation of the shortest distance between them. Once these adjacency distances are determined between neighboring expressions, they are chained to yield the shortest distance between any two expression states, using Floyd's algorithm. To visualize this high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through a user interface.
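The pipeline the abstract describes (local adjacency distances → Floyd's all-pairs shortest paths → Sammon's mapping to 2D) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the learning rate, iteration count, and array layout are assumptions.

```python
import numpy as np

def floyd_geodesics(adj):
    """All-pairs shortest paths over adjacency distances (Floyd-Warshall).
    adj[i, j] holds the local distance between adjacent expressions,
    np.inf where two expressions are not adjacent."""
    d = adj.copy()
    for k in range(d.shape[0]):
        # relax every pair (i, j) through intermediate expression k
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    return d

def sammon(dist, n_iter=200, lr=0.3, seed=0):
    """Project points into 2D so pairwise distances approximate `dist`,
    by gradient descent on Sammon's stress."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    y = rng.standard_normal((n, 2)) * 1e-2
    c = dist[~np.eye(n, dtype=bool)].sum()  # normalizing constant of the stress
    for _ in range(n_iter):
        diff = y[:, None, :] - y[None, :, :]
        dy = np.sqrt((diff ** 2).sum(-1)) + 1e-9  # current 2D distances
        # gradient of Sammon stress w.r.t. each projected point
        g = ((dy - dist) / (dist + 1e-9) / dy)[:, :, None] * diff
        y -= lr * (2.0 / c) * g.sum(axis=1)
    return y
```

An animator-facing tool would then map cursor positions in the 2D plot back to the nearest expression frames.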

A Research on Effect of Motion Modality on Aspects of Genre and Medium (모션의 양태성이 매체·장르에 미치는 효과 연구)

  • Lee, Yong-Soo
    • Cartoon and Animation Studies
    • /
    • s.28
    • /
    • pp.125-153
    • /
    • 2012
  • This study concerns the influence of motion modality on aspects of genre and medium. Motion modality has become an element that is actively manipulated in live action as well as in animation, and it is generally accepted that the strategy for manipulating motion depends on matters of genre or medium. But is that premise correct? And if it is, can it be refined theoretically without relying on the words 'genre' and 'medium', which have never been rigorously defined in academic terms? I discuss this issue on theoretical grounds. According to McLuhan's hot/cool media theory and Bolter's oscillation theory, manipulating motion modality amounts to manipulating the sensory ratio of the media audience, and this result is examined through example analysis. In the analysis, I explore the effects that manipulation of motion has in several examples; then, by examining how those effects correlate with media/genre positioning, I evaluate the genre/medium-based determinism of motion modality, represented by the typical premise that "animation is most realistic when it has the most animation-like movement." In conclusion, I suggest a refined premise: the modality of motion is a strategy that depends on issues of genre, not of medium, and genre is a category positioned by the mixture ratio of the senses. Accordingly, a strategy of motion modality depends on the sensory mix a given sequence requires, so motion modality should be approached as a function of genre rather than in terms of the economic value of technical devices such as motion capture.

Simulation of Virtual Marionette with 3D Animation Data (3D Animation Data를 활용한 가상 Marionette 시뮬레이션)

  • Oh, Eui-Sang;Sung, Jung-Hwan
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.12
    • /
    • pp.1-9
    • /
    • 2009
  • A doll, made of various materials as a miniature of the human form, has long been a component of the puppet show and a part of human cultural activity. However, supply and demand in the puppet show industry keep decreasing: the number of professional puppeteers has declined rapidly, and the skill is difficult to learn. Accordingly, many studies on robotic marionettes for automating puppet shows have been carried out internationally, and more efficient structural design and process development are required to achieve better movement and expression of the puppet with motor-based controllers. In this research, we suggest an effective way to express a marionette's motion using motion data from motion capture and a 3D graphics program; by applying the 3D motion data and proposing a simulation process, the approach can save time and expense when a robotic marionette system is actually built.

Development and Application of Automatic Motion Generator for Game Characters (게임 캐릭터를 위한 자동동작생성기의 개발과 응용)

  • Ok, Soo-Yol;Kang, Young-Min
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.8
    • /
    • pp.1363-1369
    • /
    • 2008
  • As the game and character animation industries grow, techniques for reproducing realistic character behavior are required in various fields, and intensive research has therefore been performed on methods for realistic character animation. The most common approaches involve tedious user input, physically based simulation using dynamics, and measurement of actors' behavior with input devices such as motion capture systems. Each approach has its own advantages, but all share a common disadvantage in character control. To give users convenient control, realistic animation must be generated, and modified, with high-level parameters. In this paper we propose techniques for developing an automated character animation tool that operates with high-level parameters, and introduce techniques for developing actual games using this tool.

A Study on Game Character Rigging for Root Motion (루트 모션을 위한 게임 캐릭터 리깅 연구)

  • SangWon Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.163-164
    • /
    • 2023
  • In the production environment of real-time 3D-rendered games, character movement is created either through motion capture or by animators. Motions in which the character moves at a constant speed, such as walking or running, can be implemented by animating the character in place and then moving it at a constant speed programmatically in the game. However, applying the same approach to motion at a non-constant speed makes the character's movement look awkward. To compensate for this, engines such as Unreal and Unity 3D provide a root motion feature. The hierarchy required for root motion, however, differs in some respects from the hierarchy that is most efficient for animators. This paper presents, using 3ds Max, a character rig that is both animator-friendly and suitable for root motion.
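The trade-off described above can be illustrated with a small sketch: extract the per-frame horizontal root displacement from an authored animation, leaving an in-place version on the skeleton, and let the game apply the deltas to the character controller. The `Pose` type and frame layout below are hypothetical, not part of any engine's API.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    root_pos: tuple  # hypothetical per-frame root joint position (x, y, z)

def extract_root_motion(frames):
    """Split an authored animation into (in-place frames, per-frame root deltas).
    The game then moves the character controller by each delta, so travel
    speed always matches the animation, even when it is not constant."""
    base = frames[0].root_pos
    in_place, deltas = [], []
    prev = base
    for f in frames:
        x, y, z = f.root_pos
        # keep vertical motion on the joint; move horizontal travel to the controller
        in_place.append(Pose((base[0], y, base[2])))
        deltas.append((x - prev[0], 0.0, z - prev[2]))
        prev = f.root_pos
    return in_place, deltas
```

With a constant-speed walk the deltas are all equal, which is why the simpler in-place approach works there; root motion preserves the varying deltas of accelerating or irregular movement.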


A study on retargeting error between motion capture FBX and 3ds Max biped joints (모션캡쳐 FBX와 3ds Max 바이패드 관절의 리타게팅 오차 연구)

  • SangWon Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.71-72
    • /
    • 2024
  • This paper examines the problems that arise when retargeting motion-capture FBX data provided by Adobe Mixamo, the Epic Games Marketplace, the Unity Asset Store, and similar sources onto the 3ds Max Biped animation system, and how to solve them. Retargeting motion-capture FBX onto a Biped raises issues in many areas; this paper focuses on X-axis rotation errors in the calf and forearm joints.


3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.827-836
    • /
    • 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motion in real time have been created. However, due to the Kinect's intrinsic IR-projection limitations, users are restricted to facing the sensor directly and performing all motions in a standing-still position. These constraints make it almost impossible for the 3D character to navigate the virtual environment, one of the most essential functions in games. This paper proposes a new method that lets a 3D character navigate the virtual environment with highly realistic motion. First, to detect the user's intention to navigate, the method recognizes a walking-in-place motion. Second, the algorithm applies a motion-splicing technique that automatically segments the character's upper- and lower-body motion and naturally replaces the lower-body motion with pre-processed motion-capture data. Since the proposed algorithm synthesizes realistic lower-body walking motion from motion-capture data while capturing the upper body online in a puppetry manner, the 3D character can navigate the virtual environment realistically.
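The two steps the abstract names, recognizing walking-in-place and splicing a mocap lower body under a live upper body, can be sketched roughly as follows. The joint names, threshold, and pose representation are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical joint names; a real skeleton defines its own hierarchy.
LOWER_BODY = {"hips", "l_thigh", "l_calf", "l_foot",
              "r_thigh", "r_calf", "r_foot"}

def splice(live_pose, mocap_pose):
    """Motion splicing: keep the online-captured upper body, but take
    lower-body joints from pre-processed motion-capture data."""
    return {j: (mocap_pose[j] if j in LOWER_BODY else live_pose[j])
            for j in live_pose}

def is_walking_in_place(left_heights, right_heights, threshold=0.05):
    """Crude walking-in-place trigger: both feet lift above a height
    threshold within a short window of frames, as they alternate
    during stepping in place."""
    return max(left_heights) > threshold and max(right_heights) > threshold
```

A production system would add temporal smoothing at the splice boundary (typically around the hips) so the two motion sources blend without a visible seam.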

Method of Automatic Reconstruction and Animation of Skeletal Character Using Metacubes (메타큐브를 이용한 캐릭터 골격 및 애니메이션 자동 생성 방법)

  • Kim, Eun-Seok;Hur, Gi-Taek;Youn, Jae-Hong
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.11
    • /
    • pp.135-144
    • /
    • 2006
  • Implicit surface models are convenient for modeling objects composed of complicated surfaces, such as characters and liquids. They can express various surface forms with a relatively small amount of data, and they can represent both the surface and the volume of an object, so the modeling technique can be applied efficiently to object deformation and 3D animation. However, existing implicit primitives are axis-aligned or symmetric with respect to the axes, which makes it difficult to model objects with varied forms of motion. In this paper, we propose an efficient animation method for modeling various character poses matched to motion-capture data, by adding a rotation attribute to the metacube, one of the implicit primitives.
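As a rough illustration of the idea, the sketch below adds a rotation attribute to a simplified cube-like implicit primitive by rotating each sample point into the primitive's local frame before evaluating the field. The falloff function is an illustrative stand-in, not the paper's metacube formulation.

```python
import math

def metacube_field(p, half):
    """Simplified cube-like implicit primitive: field falls from 1 at the
    center to 0 at the faces of an axis-aligned box with half-extents
    `half` (illustrative stand-in for a metacube)."""
    f = 1.0
    for d, h in zip(p, half):
        f = min(f, max(0.0, 1.0 - abs(d) / h))
    return f

def rotated_field(p, center, half, angle):
    """Rotation attribute: transform the sample point into the primitive's
    local frame (rotation about the z axis here) before evaluating the
    field, so the primitive need not stay axis-aligned."""
    c, s = math.cos(-angle), math.sin(-angle)
    x, y, z = (p[i] - center[i] for i in range(3))
    local = (c * x - s * y, s * x + c * y, z)
    return metacube_field(local, half)
```

The surface is the level set of the summed fields of all primitives; because rotation happens per primitive, each can be oriented along a bone from the motion-capture skeleton.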


Study of Next Generation Game Animation (넥스트 제너레이션 게임애니메이션 연구)

  • Park, Hong-Kyu
    • Cartoon and Animation Studies
    • /
    • s.13
    • /
    • pp.223-236
    • /
    • 2008
  • The video game industry is preoccupied with the notion of the "next-generation game." The appearance of next-generation game consoles has required the industry to adopt new technologies across its entire production pipeline, and this tendency greatly increases production cost. Game companies have to hire more designers to create a solid concept, more artists to generate more detailed content, and more programmers to optimize for more complex hardware. All of those costly efforts produce great-looking games, but the potential of next-generation consoles does not end there: they also open up new types of gameplay. A next-generation game has a much larger pool of memory for every game element. Where an entire game once used roughly 800 animation files, a next-generation game can push scripted events well over 4,000 animation files. That allows many unique custom animations for nearly every action in the game, giving players a much more vivid and realistic experience of the virtual world; players no longer see the same animation recycled over and over. The main purpose of this thesis is to define the concept of the next-generation game and to analyze a new animation pipeline for use in shooter games.


Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.90-98
    • /
    • 2010
  • This paper describes a methodology that enables a user to generate facial expression animation by applying facial motion data to a vector-based caricature. The method was implemented as an Illustrator plug-in and is equipped with its own user interface. For the experimental data, 28 small markers were attached to the important muscular parts of an actor's face, and many varied expressions were captured with a Facial Tracker. The caricature was produced as Bezier curves whose control points correspond to the locations of the important markers attached to the actor's face during motion capture, so that each region could be connected to the motion data. Because the facial motion data and the caricature differ in spatial scale, a motion calibration step was applied, and the user can adjust it at any time. To connect the caricature and the markers, the user selects the name of each face region from a menu and then clicks the corresponding region of the caricature. Finally, the paper demonstrates, through the Illustrator user interface, that facial expression animation of a vector-based caricature driven by facial motion data is possible.
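The core mechanism, Bezier control points of the caricature offset by scale-calibrated marker displacements, can be sketched as follows. This is an illustrative reconstruction, not the Illustrator plug-in's code; the 2D point representation and scale parameter are assumptions.

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def apply_markers(ctrl, deltas, scale):
    """Offset each control point by its matched marker's displacement,
    scaled to calibrate marker space to caricature space."""
    return [(x + scale * dx, y + scale * dy)
            for (x, y), (dx, dy) in zip(ctrl, deltas)]
```

Per frame of captured data, the curve is redrawn through the offset control points, which is what animates the caricature's expression.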