• Title/Summary/Keyword: Motion Retargeting (모션 리타겟팅)

Search results: 8

Body Motion Retargeting to Rig-space (리깅 공간으로의 몸체 동작 리타겟팅)

  • Song, Jaewon;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.3
    • /
    • pp.9-17
    • /
    • 2014
  • This paper presents a method to retarget a source motion to the rig-space parameters of a target character that can be equipped with a complex rig structure, as used in traditional animation pipelines. Our solution allows animators to edit the retargeted motion easily and intuitively, since they can work with the same rig parameters they have been using for keyframe animation. To achieve this, we analyze the correspondence between the source motion space and the target rig-space, and then perform non-linear optimization to retarget the motion into the target rig-space. We observed the general workflow practiced by animators and applied this process to the optimization step.
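The rig-space optimization the abstract describes can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's actual rig or solver: a single `bend` parameter drives a two-link limb, and a simple derivative-free descent fits it to source joint positions in place of the paper's non-linear optimization.

```python
import math

# Hypothetical two-link limb rig: one rig parameter "bend" (radians)
# drives the elbow and wrist positions via forward kinematics.
def rig_eval(bend, l1=1.0, l2=1.0):
    elbow = (l1 * math.cos(bend), l1 * math.sin(bend))
    wrist = (elbow[0] + l2 * math.cos(2 * bend),
             elbow[1] + l2 * math.sin(2 * bend))
    return [elbow, wrist]

def cost(bend, source_pts):
    # Sum of squared distances between rig-driven and source joint positions.
    pts = rig_eval(bend)
    return sum((px - sx) ** 2 + (py - sy) ** 2
               for (px, py), (sx, sy) in zip(pts, source_pts))

def retarget_frame(source_pts, init=0.0, step=0.1, iters=200):
    """Derivative-free descent over the single rig parameter."""
    bend = init
    for _ in range(iters):
        c = cost(bend, source_pts)
        for cand in (bend + step, bend - step):
            if cost(cand, source_pts) < c:
                bend, c = cand, cost(cand, source_pts)
        step *= 0.95  # shrink the search step as we converge
    return bend

# Source pose generated with a known bend of 0.6 rad;
# the optimizer should recover a value close to it.
source = rig_eval(0.6)
solved = retarget_frame(source)
```

In a production setting the rig function would be the character's full rig evaluation and the optimizer would work over many rig parameters per frame, typically with a proper non-linear least-squares solver.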

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper, we introduce a technique to retarget human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions, such as holding an object by hand or avoiding obstacles. In addition, we assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, it is difficult to preserve the context of the interaction shown in the original motion data if a retargeting technique that considers only the change of body shape is used. Our approach is to separate the problem into two smaller problems and solve them independently: retargeting the motion data to the new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we precisely deform the shape of the environment model to match the humanoid motion so that the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environment model, and restore the environment model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can help with the humanoid motion-tracking problem in the future.
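The two-stage decomposition (retarget the skeleton first, then restore the interaction via spatial constraints) can be sketched in one dimension. All names and numbers below are illustrative assumptions, not the paper's formulation:

```python
def retarget_with_contact(source_hand, source_object, src_len, tgt_len):
    """1-D sketch: a shorter-armed humanoid must keep the hand-object contact."""
    # Stage 1: naive skeleton retarget (same joint angles, shorter limb),
    # ignoring the environment entirely.
    hand = source_hand * (tgt_len / src_len)
    # Stage 2a: deform the environment so the original contact still holds
    # against the retargeted motion.
    deformed_object = source_object + (hand - source_hand)
    # Stage 2b: restore the environment to its original shape while keeping
    # the spatial constraint, here by translating the character.
    root_shift = source_object - deformed_object
    return hand + root_shift, source_object

# Source arm (length 1.0) touches an object at x = 1.0; the target arm
# is shorter (0.8), yet the contact must be reproduced.
final_hand, final_object = retarget_with_contact(1.0, 1.0, 1.0, 0.8)
```

The real method operates on full skeletons and environment meshes, but the ordering of the steps (retarget, deform, constrain, restore) is the same.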

Motion Retargetting Simplification for H-Anim Characters (H-Anim 캐릭터의 모션 리타겟팅 단순화)

  • Jung, Chul-Hee;Lee, Myeong-Won
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.15 no.10
    • /
    • pp.791-795
    • /
    • 2009
  • There is a need for a system-independent human data format that does not depend on a specific graphics tool or program, so that interoperable human data can be used in a network environment. To achieve this, the Web3D Consortium and ISO/IEC JTC1 WG6 developed the international draft standard ISO/IEC 19774 Humanoid Animation (H-Anim). H-Anim defines the data structure for an articulated human figure, but it does not yet define data for human motion generation. This paper discusses a method of obtaining compatibility and independence of motion data between application programs, and describes a method of simplifying the motion retargeting necessary for defining the motion of H-Anim characters. In addition, it describes a method of generating H-Anim character animation from arbitrary 3D character models and arbitrary motion capture data with no inter-relations, together with its implementation results.
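The joint-name remapping such a simplification needs can be sketched as follows. The source-side names are a hypothetical Mixamo-style convention, and the mapping itself is a rough illustration; the target names follow the H-Anim standard joint vocabulary (`l_hip`, `l_knee`, ...):

```python
# Illustrative partial map from a hypothetical mocap naming convention
# to H-Anim joint names (ISO/IEC 19774).
HANIM_MAP = {
    "Hips": "humanoid_root",
    "LeftUpLeg": "l_hip",
    "LeftLeg": "l_knee",
    "LeftFoot": "l_ankle",
    "LeftArm": "l_shoulder",
    "LeftForeArm": "l_elbow",
    "LeftHand": "l_wrist",
}

def remap_channels(frame):
    """Rename per-joint rotation channels to H-Anim joint names,
    dropping joints the target character does not define."""
    return {HANIM_MAP[j]: rot for j, rot in frame.items() if j in HANIM_MAP}
```

With such a map, the same motion data can drive any H-Anim-conformant character regardless of the tool that produced it, which is the interoperability the abstract targets.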

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.2
    • /
    • pp.11-19
    • /
    • 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. A facial rig is an essential tool in the production pipeline that helps artists create facial animation. Directly mapping motion capture data to the facial rig is highly convenient, because artists are already familiar with facial rigs and the direct mapping produces results that are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not a trivial task: facial rigs vary widely in structure, so it is hard to devise a generalized mapping method for arbitrary rigs. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from a human's.
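The data-driven idea (learn a map from capture data to rig controls from example pairs, then apply it to new frames) can be sketched with a one-dimensional least-squares fit. The feature and control names are hypothetical, and the paper's actual mapping is more general than a single linear function:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b from example pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training pairs: normalized jaw-marker height from the
# capture data -> value of a "jaw_open" rig control set by an artist.
marker_heights = [0.0, 0.5, 1.0]
jaw_open_values = [0.1, 0.6, 1.1]
a, b = fit_linear(marker_heights, jaw_open_values)

# Apply the learned map to a new capture frame.
predicted = a * 0.25 + b
```

Because the output lives directly in rig-control space, an artist can keep editing the result with the same rig, which is the workflow advantage the abstract emphasizes.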

A study on retargeting error between motion capture FBX and 3ds Max biped joints (모션캡쳐 FBX와 3ds Max 바이패드 관절의 리타게팅 오차 연구)

  • SangWon Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.71-72
    • /
    • 2024
  • This paper examines the problems that arise when retargeting motion capture FBX data provided by Adobe Mixamo, the Epic Games Marketplace, the Unity Asset Store, and similar sources onto the 3ds Max Biped animation system, along with methods for resolving them. Retargeting motion capture FBX onto a Biped raises issues in many respects; this paper focuses on the X-axis rotation errors of the calf and lower-arm joints.
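Diagnosing the X-axis rotation error the paper focuses on amounts to comparing per-frame rotation curves between the FBX source and the retargeted Biped, while accounting for angle wrap-around. A minimal sketch (curve names and values are assumptions):

```python
def x_rotation_error(fbx_curve, biped_curve):
    """Per-frame X-axis rotation difference in degrees, wrapped to (-180, 180].

    fbx_curve / biped_curve: lists of X-rotation keys for the same joint
    (e.g. a calf or lower-arm joint) sampled at the same frames.
    """
    def wrap(deg):
        # Map any difference into (-180, 180] so 350 vs 10 reads as -20, not 340.
        return (deg + 180.0) % 360.0 - 180.0
    return [wrap(f - b) for f, b in zip(fbx_curve, biped_curve)]
```

Plotting such an error curve per joint makes it easy to see whether the mismatch is a constant offset (a rest-pose difference) or frame-dependent (an Euler-order or solver issue).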

Retargetting Fine Facial Motion Data to New Faces (고밀도 얼굴 모션 캡쳐 데이터를 새로운 얼굴로 리타겟팅하는 기법)

  • Na, Kyung-Keon;Jung, Moon-R.
    • Journal of the Korea Computer Graphics Society
    • /
    • v.9 no.3
    • /
    • pp.7-13
    • /
    • 2003
  • This paper proposes a retargeting technique that re-applies facial motion data captured from a real human face to a new face. The technique can be applied even to face models with very different shapes, and it is particularly well suited to retargeting fine motion such as wrinkles. By using a multi-resolution mesh, namely a normal mesh, the technique establishes a hierarchical correspondence between the source and the target and retargets hierarchically. A normal mesh is a hierarchical mesh that approximates a given mesh with a base mesh and a sequence of normal offsets. The proposed technique first retargets the base motion from the source model to the target model, and then hierarchically adds the motion of the normal offsets on top of it. This technique produces stable and precise retargeting results even for very fine motion.
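The normal-mesh scheme (retarget the coarse base motion, then add back the normal-offset detail level by level) can be sketched in one dimension, with scalar values standing in for vertex displacements. All names and numbers are hypothetical:

```python
def retarget_hierarchical(base_motion, offsets_per_level, target_base):
    """Move a base-level motion onto a target model, then restore detail.

    base_motion: per-frame displacements of a base-mesh vertex (source).
    offsets_per_level: per-level lists of normal-offset motion, coarse to fine.
    target_base: rest value of the corresponding target base-mesh vertex.
    """
    # Step 1: retarget the base motion by aligning it to the target base.
    delta = target_base - base_motion[0]
    result = [b + delta for b in base_motion]
    # Step 2: hierarchically add the normal-offset motion back on top.
    for level in offsets_per_level:
        result = [r + o for r, o in zip(result, level)]
    return result

# Two frames of base motion, one detail level (e.g. a wrinkle), target base 5.0.
frames = retarget_hierarchical([0.0, 1.0], [[0.1, 0.1]], 5.0)
```

Separating coarse motion from fine offsets is what lets the same detail (wrinkles) transfer stably even when the base shapes differ greatly.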

A study about the problems and their solutions in the production process of 3D character animation using optical motion capture technology (옵티컬 모션캡쳐 기술을 활용한 3D 캐릭터 애니메이션에서 제작과정상 문제점 및 해결방안에 관한 연구)

  • Lee, Man-Woo;Kim, Hyun-Jong;Kim, Soon-Gohn
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.831-835
    • /
    • 2006
  • Motion capture means recording the movement of objects such as human beings, animals, creatures, and machines in a form applicable to a computer. Since the motion capture system can be introduced in fields where realistic movement of humans and animals is required that cannot be attained with the existing keyframe method, where large scale is necessary, or where an economic burden exists, it offers the merit and possibility of new forms of expression. For these reasons, this method is increasingly used in digital entertainment fields such as movies, TV, advertisements, documentaries, and music videos, centering on games. However, in spite of such advantages, problems are becoming prominent, such as the extensive advance preparation required for digital image production using motion capture, marker attachment, compensation of motion data, motion retargeting, and a lack of professional human resources. Accordingly, this study intends to suggest ways to produce motion capture digital images more effectively by identifying the problems in the production process and drafting possible solutions, based on examples of image production using motion capture.

Documentation of Intangible Cultural Heritage Using Motion Capture Technology Focusing on the documentation of Seungmu, Salpuri and Taepyeongmu (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo;Go, Jungil;Kim, Yongsuk
    • Korean Journal of Heritage: History & Science
    • /
    • v.39
    • /
    • pp.351-378
    • /
    • 2006
  • With the development of media, methods for documenting intangible cultural heritage have also developed and diversified. In addition to previous analogue methods of documentation, new multimedia technologies focusing on digital pictures, sound sources, movies, etc. have recently been applied. Among the new technologies, documentation of intangible cultural heritage using 'motion capture' has proved especially prominent in fields that require three-dimensional documentation, such as dances and performances. Motion capture refers to a documentation technology that records the signals of time-varying positions derived from sensors attached to the surface of an object. It converts the signals from the sensors into digital data that can be plotted as points on the virtual coordinates of a computer, and records the movement of those points over a certain period of time as the object moves. It produces scientific data for the preservation of intangible cultural heritage by displaying digital data that represents the virtual motion of a holder of an intangible cultural heritage. The National Research Institute of Cultural Properties (NRICP) has been working on the development of a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government. This is to be done using motion capture equipment that is also widely used for computer graphics in the movie and game industries. The project is designed to apply motion capture technology over three years, from 2005 to 2007, to 11 performances from 7 traditional dances whose body gestures have considerable value among the Important Intangible Cultural Heritage performances, supported by lottery funds.
In 2005, the first year of the project, data were accumulated for solo dances that are relatively easy in terms of performing skills: Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the last year of the project, 2007, an education programme for comparative studies, analysis, and transmission of intangible cultural heritage, as well as three-dimensional contents for public service, will be devised based on the accumulated data, together with the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance). By describing the processes and results of the motion capture documentation of the Salpuri dance (Lee Mae-bang), Taepyeongmu (Kang Seon-young), and Seungmu (Lee Mae-bang, Lee Ae-ju, and Jung Jae-man) conducted in 2005, this report introduces a new approach to the documentation of intangible cultural heritage. During the first year of the project, two questions were raised. First, how can we capture the motions of a holder (dancer) without cutoffs during quite a long performance? After many tests, the motion capture system proved stable, producing continuous results. Second, how can we reproduce accurate motion without the re-targeting process? The project re-created the dancers' gestures most accurately by applying, for the first time in Korea, a new technology that derives the shape of each dancer's body as digital data before the motion capture process. The accurate three-dimensional body models of the four holders obtained by body scanning enhanced the accuracy of the motion capture of the dances.