• Title/Summary/Keyword: Motion Capture Animation

Motion Retargetting Simplification for H-Anim Characters (H-Anim 캐릭터의 모션 리타겟팅 단순화)

  • Jung, Chul-Hee;Lee, Myeong-Won
    • Journal of KIISE:Computing Practices and Letters / v.15 no.10 / pp.791-795 / 2009
  • There is a need for a system-independent human data format that does not depend on a specific graphics tool or program, so that interoperable human data can be used in a network environment. To achieve this, the Web3D Consortium and ISO/IEC JTC1 WG6 developed the international draft standard ISO/IEC 19774 Humanoid Animation (H-Anim). H-Anim defines the data structure for an articulated human figure, but it does not yet define data for human motion generation. This paper discusses a method for obtaining compatibility and independence of motion data between application programs, and describes a method for simplifying the motion retargeting needed to define motion for H-Anim characters. In addition, it describes a method for generating H-Anim character animation from an arbitrary 3D character model and arbitrary motion capture data that have no prior relation to each other, and presents its implementation results.
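
The abstract stops short of spelling out the retargeting procedure itself. As a purely illustrative sketch of the simplest mapping it alludes to (copying per-frame joint rotations from an arbitrary mocap skeleton onto standard H-Anim joint names), the following hypothetical Python fragment may help; the name map, motion format, and rotation representation are assumptions, not the paper's method.

```python
# Hypothetical sketch: copy per-frame joint rotations from an arbitrary mocap
# skeleton onto an H-Anim joint hierarchy via a name map.  The joint names,
# Euler-angle convention and frame format are illustrative assumptions.

# Illustrative map from a BVH-style mocap skeleton to H-Anim joint names.
MOCAP_TO_HANIM = {
    "Hips": "humanoid_root",
    "LeftUpLeg": "l_hip",
    "LeftLeg": "l_knee",
    "LeftFoot": "l_ankle",
    "RightUpLeg": "r_hip",
    "RightLeg": "r_knee",
    "RightFoot": "r_ankle",
    "LeftArm": "l_shoulder",
    "LeftForeArm": "l_elbow",
    "RightArm": "r_shoulder",
    "RightForeArm": "r_elbow",
}

def retarget_frame(mocap_rotations):
    """Map one frame of {mocap_joint: (rx, ry, rz)} Euler rotations (degrees)
    onto the corresponding H-Anim joints; unmapped joints are dropped."""
    return {
        MOCAP_TO_HANIM[joint]: rotation
        for joint, rotation in mocap_rotations.items()
        if joint in MOCAP_TO_HANIM
    }

if __name__ == "__main__":
    frame = {"Hips": (0.0, 90.0, 0.0), "LeftUpLeg": (10.0, 0.0, 5.0)}
    print(retarget_frame(frame))
```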

Development of a Low-cost Monocular PSD Motion Capture System with Two Active Markers at Fixed Distance (일정간격의 두 능동마커를 이용한 저가형 단안 PSD 모션캡쳐 시스템 개발)

  • Seo, Pyeong-Won;Kim, Yu-Geon;Han, Chang-Ho;Ryu, Young-Kee;Oh, Choon-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.2 / pp.61-71 / 2009
  • In this paper, we propose a low-cost, compact motion capture system that makes it possible to play motion games on the PS2 (PlayStation 2). Motion capture systems currently used in film production and game development are expensive and bulky, while motion games based on common USB cameras are slow and limited to two-dimensional recognition. A PSD (Position Sensitive Detector) sensor, by contrast, is fast and inexpensive. 3D motion capture systems using 2D PSD optical sensors have recently been developed: a multi-PSD system that applies stereo vision, and a single-PSD system that applies optical theory. Both have problems when applied to motion games. The multi-PSD system is costly and complicated because it uses two or more PSD cameras, and for the single-PSD system it is difficult to make markers with equal intensity in all directions. In this research, we propose a new approach that solves these problems: it can measure a 3D coordinate as long as the two separated markers have equal intensity. We built a system based on this approach and tested its performance. As a result, we were able to develop a motion capture system that is single-camera, low-cost, fast, compact, wide-angle, and well suited to motion games. The developed system is expected to be useful in animation, movies, and games.
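
The abstract only hints at the geometry behind the fixed-distance marker pair. A minimal sketch of that idea, assuming a pinhole camera model and a marker pair roughly parallel to the image plane, is the relation Z = f * D / d, where D is the physical marker spacing and d the apparent spacing in pixels; the focal length, spacing, and principal point below are made-up values, not the paper's calibration.

```python
# Sketch: recover depth (and a rough 3D position) of a two-marker rig of known
# physical spacing from a single camera view, under pinhole-model assumptions.
import math

FOCAL_LENGTH_PX = 800.0            # assumed focal length in pixels
MARKER_SPACING_M = 0.20            # assumed fixed distance between markers
PRINCIPAL_POINT = (320.0, 240.0)   # assumed image centre

def estimate_depth(p1, p2):
    """Depth (m) from the pixel positions of the two markers: Z = f * D / d."""
    d = math.dist(p1, p2)          # apparent marker separation in pixels
    return FOCAL_LENGTH_PX * MARKER_SPACING_M / d

def estimate_position(p1, p2):
    """Back-project the midpoint of the marker pair to a 3D point (metres)."""
    z = estimate_depth(p1, p2)
    cx, cy = PRINCIPAL_POINT
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    return (mx - cx) * z / FOCAL_LENGTH_PX, (my - cy) * z / FOCAL_LENGTH_PX, z

if __name__ == "__main__":
    # Two markers seen 80 px apart around the image centre -> Z = 2.0 m.
    print(estimate_position((300.0, 240.0), (380.0, 240.0)))
```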

Generation of Adaptive Walking Motion for Uneven Terrain (다양한 지형에서의 적응적인 걷기 동작 생성)

  • 송미영;조형제
    • Journal of KIISE:Software and Applications / v.30 no.11 / pp.1092-1101 / 2003
  • Most 3D character animation adjusts a character's gait to various terrains using motion capture data obtained with motion capture equipment. Such data reproduces real human motion naturally, but it must be adjusted for each type of terrain, and applying it to other characters is difficult: the motion must either be captured again or the existing data must be edited. Therefore, this paper proposes a method for generating walking motion over various terrains, such as flat ground, inclined planes, stairs, and irregular surfaces, together with a method for calculating the trajectories of the swing leg and the pelvis. These methods can generate various gaits controlled by parameters such as body height, walking speed, and stride. Joint positions and angles are computed using inverse kinematics, and cubic splines are used to calculate the joint trajectories.
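
As a rough illustration of the two ingredients named at the end of the abstract, a cubic spline for the swing-foot trajectory and inverse kinematics for the leg joints, the sketch below uses SciPy's CubicSpline and a textbook two-link analytic IK in the sagittal plane; the keyframe times, leg segment lengths, and 2D simplification are assumptions rather than the paper's formulation.

```python
# Sketch: swing-foot trajectory via a cubic spline, then two-link leg IK.
import math
import numpy as np
from scipy.interpolate import CubicSpline

THIGH, SHIN = 0.45, 0.43   # assumed leg segment lengths in metres

def swing_foot_trajectory(start, end, clearance):
    """Cubic spline through lift-off, two mid-swing points and touch-down.

    `start` and `end` are (x, z) foot positions; `clearance` lifts the foot
    over the higher of the two terrain heights during mid-swing."""
    apex = max(start[1], end[1]) + clearance
    times = [0.0, 0.35, 0.65, 1.0]
    xs = [start[0], start[0] + 0.35 * (end[0] - start[0]),
          start[0] + 0.65 * (end[0] - start[0]), end[0]]
    zs = [start[1], apex, apex, end[1]]
    return CubicSpline(times, np.column_stack([xs, zs]))

def leg_ik(hip, foot):
    """Hip and knee angles (sagittal plane, radians) reaching `foot`."""
    dx, dz = foot[0] - hip[0], foot[1] - hip[1]
    dist = min(math.hypot(dx, dz), THIGH + SHIN - 1e-6)   # clamp to reachable
    # Law of cosines for knee flexion, law of sines for the hip correction.
    knee = math.pi - math.acos(
        (THIGH**2 + SHIN**2 - dist**2) / (2.0 * THIGH * SHIN))
    hip_angle = math.atan2(dx, -dz) - math.asin(SHIN * math.sin(knee) / dist)
    return hip_angle, knee

if __name__ == "__main__":
    spline = swing_foot_trajectory(start=(0.0, 0.0), end=(0.6, 0.1),
                                   clearance=0.08)
    hip = (0.3, 0.8)                      # assumed fixed hip during the step
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        foot = spline(t)
        print(round(t, 2), np.round(foot, 3), leg_ik(hip, foot))
```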

Motion correction captured by Kinect based on synchronized motion database (동기화된 동작 데이터베이스를 활용한 Kinect 포착 동작의 보정 기술)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.41-47 / 2017
  • In this paper, we present a method for data-driven correction of noisy motion data captured from a low-end RGB-D camera such as the Kinect. Our key idea is to construct a synchronized motion database captured simultaneously with a Kinect and an additional specialized motion capture device, so that the database pairs each erroneous Kinect pose with its corresponding correct pose from the mocap device. At runtime, given motion data captured from the Kinect, we search the database for the K most similar Kinect poses and synthesize a new motion using only their corresponding mocap poses. We describe how to build such a motion database effectively and provide a method for querying and searching the database for a desired motion. We also adopt the lazy learning framework to synthesize the corrected poses from the query results.
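
A minimal sketch of the correction step as the abstract describes it, under simplifying assumptions: poses are flat joint-coordinate vectors, similarity is Euclidean distance, and the lazy-learning synthesis is reduced to distance-weighted blending of the K paired mocap poses.

```python
# Sketch: k-nearest-neighbour correction of a noisy Kinect pose using a
# synchronized (Kinect pose, mocap pose) database.
import numpy as np

class SynchronizedPoseDatabase:
    def __init__(self, kinect_poses, mocap_poses):
        # kinect_poses[i] and mocap_poses[i] were captured at the same instant.
        self.kinect = np.asarray(kinect_poses, dtype=float)
        self.mocap = np.asarray(mocap_poses, dtype=float)

    def correct(self, query_pose, k=5, eps=1e-8):
        """Replace a noisy Kinect pose with a blend of mocap poses whose
        paired Kinect poses are most similar to the query."""
        query = np.asarray(query_pose, dtype=float)
        dists = np.linalg.norm(self.kinect - query, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + eps)      # closer pairs weigh more
        weights /= weights.sum()
        return weights @ self.mocap[nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(200, 45))              # 15 joints x 3 coordinates
    noisy = clean + rng.normal(scale=0.05, size=clean.shape)
    db = SynchronizedPoseDatabase(kinect_poses=noisy, mocap_poses=clean)
    print(db.correct(noisy[0] + 0.02).shape)
```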

Motion Patches (모션 패치)

  • Choi, Myung-Geol;Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory / v.33 no.1_2 / pp.119-127 / 2006
  • Real-time animation of human figures in virtual environments is an important problem in the context of computer games and virtual environments. Recently, the use of large collections of captured motion data has added realism to character animation. However, when the virtual environment is large and complex, the effort of capturing motion data in a physical environment and adapting it to an extended virtual environment becomes the bottleneck for achieving interactive character animation and control. We present a new technique that allows our animated characters to navigate through a large virtual environment constructed from a small set of building blocks. The building blocks can be tiled or aligned in a repeating pattern to create a large environment. We annotate each block with a motion patch, which specifies what motions are available to animated characters within that block. We demonstrate the versatility and flexibility of our approach through examples in which multiple characters are animated and controlled at interactive rates in large, complex virtual environments.
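
As an illustration of what a motion-patch annotation might look like in code (the block types, motion names, and grid layout below are invented, not the paper's data), each building-block type lists the motions available inside it, the environment is tiled from those blocks, and a character queries the patch under its position.

```python
# Sketch: building blocks annotated with motion patches, tiled into a world.
from dataclasses import dataclass, field

@dataclass
class MotionPatch:
    block_type: str
    motions: list = field(default_factory=list)   # clips usable in this block

PATCHES = {
    "floor":  MotionPatch("floor",  ["walk", "run", "idle"]),
    "stairs": MotionPatch("stairs", ["climb_up", "climb_down"]),
    "bench":  MotionPatch("bench",  ["sit_down", "stand_up"]),
}

# A larger environment tiled from the small set of annotated blocks.
TILE = 4.0   # metres per block
WORLD = [
    ["floor", "floor", "stairs"],
    ["floor", "bench", "floor"],
]

def available_motions(x, y):
    """Return the motions permitted by the patch under world position (x, y)."""
    col, row = int(x // TILE), int(y // TILE)
    return PATCHES[WORLD[row][col]].motions

if __name__ == "__main__":
    print(available_motions(5.0, 5.0))   # -> ['sit_down', 'stand_up']
```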

A Method for Generating Inbetween Frames in Sign Language Animation (수화 애니메이션을 위한 중간 프레임 생성 방법)

  • O, Jeong-Geun;Kim, Sang-Cheol
    • The Transactions of the Korea Information Processing Society / v.7 no.5 / pp.1317-1329 / 2000
  • Advances in video processing and computer graphics have made sign language education systems possible: such a system can display the sign language motion for an arbitrary sentence using captured video clips of sign language words. In this paper, a method is proposed that generates the frames between the last frame of a word and the first frame of the following word in order to animate the hand motion. In our method, we determine the hand locations and angles required for inbetween frame generation, then capture and store hand images at those locations and angles. Generating the inbetween frames is then simply a matter of finding a sequence of hand angles and locations. Our method is computationally simple and requires a relatively small amount of disk space, yet our experiments show that inbetween frames can be produced for presentation at about 15 fps (frames per second), so that smooth animation of hand motion is possible. Our method improves on previous work in which the computational cost is relatively high or unnecessary images are generated.
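
A sketch of the inbetweening step under stated assumptions: the transition between two words is reduced to a sequence of interpolated hand locations and angles, each of which selects the closest pre-captured hand image; the interpolation scheme, transition duration, and image lookup are illustrative choices, not the paper's exact procedure.

```python
# Sketch: generate hand states for the frames between two sign-language words,
# then pick the nearest stored hand image for each state.
import math

FPS = 15                      # presentation rate mentioned in the abstract

def lerp(a, b, t):
    return a + (b - a) * t

def inbetween_hand_states(last_of_word, first_of_next, duration_s):
    """Yield (x, y, angle_deg) for the frames between two word clips."""
    n = max(int(duration_s * FPS), 1)
    for i in range(1, n):
        t = i / n
        yield (lerp(last_of_word[0], first_of_next[0], t),
               lerp(last_of_word[1], first_of_next[1], t),
               lerp(last_of_word[2], first_of_next[2], t))

def nearest_stored_image(state, stored_states):
    """Pick the pre-captured hand image whose (x, y, angle) is closest."""
    return min(stored_states,
               key=lambda s: math.dist(s[:2], state[:2]) + abs(s[2] - state[2]))

if __name__ == "__main__":
    frames = list(inbetween_hand_states((0, 0, 0), (40, 20, 90), 0.4))
    stored = [(0, 0, 0), (10, 5, 20), (20, 10, 45), (30, 15, 70), (40, 20, 90)]
    for state in frames:
        print(state, "->", nearest_stored_image(state, stored))
```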

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory / v.33 no.4 / pp.231-241 / 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions is the bottleneck for achieving interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
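
For readers unfamiliar with Q-learning, a generic tabular sketch of the trial-and-error update it performs is given below; the state encoding, action set of motion clips, reward, and toy environment are invented stand-ins, not the paper's formulation.

```python
# Sketch: tabular Q-learning with epsilon-greedy exploration over a tiny
# stand-in environment where the avatar is rewarded for facing its target.
import random
from collections import defaultdict

ACTIONS = ["walk_forward", "turn_left", "turn_right"]   # assumed motion clips
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)        # Q[(state, action)] -> estimated return

def choose_action(state):
    if random.random() < EPSILON:                        # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])     # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

def toy_environment(state, action):
    """State is a heading in {0..7}; heading 0 means facing the target."""
    heading = state
    if action == "turn_left":
        heading = (heading - 1) % 8
    elif action == "turn_right":
        heading = (heading + 1) % 8
    reward = 1.0 if heading == 0 else -0.1
    return heading, reward

if __name__ == "__main__":
    state = 4
    for _ in range(5000):
        action = choose_action(state)
        next_state, reward = toy_environment(state, action)
        update(state, action, reward, next_state)
        state = next_state
    print({a: round(Q[(0, a)], 2) for a in ACTIONS})
```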

Distance Measuring Method for Motion Capture Animation (모션캡쳐 애니메이션을 위한 거리 측정방법)

  • Lee, Heei-Man;Seo, Jeong-Man;Jung, Suun-Key
    • The KIPS Transactions:PartB / v.9B no.1 / pp.129-138 / 2002
  • In this paper, a distance measuring algorithm for motion capture using a color stereo camera is proposed. Color markers attached to the joints of an actor are captured by stereo color video cameras, and the regions in the captured images that match a marker's color are separated from the other colors by finding the dominant wavelength. For this purpose, color data in the RGB (red, green, blue) color space is converted into the CIE (Commission Internationale de l'Eclairage) color space, and the dominant wavelength is selected from a histogram of neighboring wavelengths. The motion of a character in virtual space is then controlled by a program using the distance information of the moving markers.
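
A sketch of two of the steps mentioned above, under stated assumptions: pixels are converted from linear RGB to CIE XYZ with the standard D65 matrix and matched to the marker by chromaticity distance (a simpler stand-in for the paper's dominant-wavelength test), and matched marker centres in the two views yield depth by ordinary stereo triangulation with an assumed focal length and baseline.

```python
# Sketch: colour-based marker segmentation in CIE chromaticity space, then
# stereo depth from the horizontal disparity of the matched marker centres.
import numpy as np

# Linear sRGB -> CIE XYZ (D65) conversion matrix.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def chromaticity(rgb):
    """CIE (x, y) chromaticity of a linear RGB triple."""
    X, Y, Z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return (X / s, Y / s) if s > 0 else (0.0, 0.0)

def is_marker_pixel(rgb, marker_xy, tol=0.03):
    """Accept a pixel whose chromaticity is close to the marker's."""
    x, y = chromaticity(rgb)
    return abs(x - marker_xy[0]) < tol and abs(y - marker_xy[1]) < tol

def stereo_depth(x_left, x_right, focal_px=800.0, baseline_m=0.1):
    """Depth of a marker centre from its horizontal disparity."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    marker_xy = chromaticity((0.9, 0.1, 0.1))           # a reddish marker
    print(is_marker_pixel((0.85, 0.12, 0.1), marker_xy))
    print(stereo_depth(330.0, 310.0))                   # -> 4.0 metres
```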

A Study on 3D Rendering based on Freeware (Freeware를 활용한 3차원 Rendering에 관한 연구)

  • Kim, Yong-Gwan
    • Cartoon and Animation Studies / s.15 / pp.123-137 / 2009
  • This thesis investigates opportunities to use freeware in the development and application of digital content creation software. The field consists mainly of 2D compositing and editing software, 3D production software, and rendering software, with motion capture, 3D digitizing, and other tools playing a smaller role in and around production facilities. Most digital content creation software is made abroad, for example by the US and Canadian film, game, and animation industries; this raises production costs, lowers studio profits, and encourages the use of illegally copied software. The thesis presents a way to use freeware in the production process by researching and analyzing the international and domestic software markets and the global trend toward freeware, and by suggesting freeware for each production step. It also includes a performance test of freeware rendering software.

Body Motion Retargeting to Rig-space (리깅 공간으로의 몸체 동작 리타겟팅)

  • Song, Jaewon;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.20 no.3 / pp.9-17 / 2014
  • This paper presents a method for retargeting a source motion to the rig-space parameters of a target character equipped with the kind of complex rig structure used in traditional animation pipelines. Our solution allows animators to edit the retargeted motion easily and intuitively, because they can work with the same rig parameters they already use for keyframe animation. To achieve this, we analyze the correspondence between the source motion space and the target rig-space, and then perform a non-linear optimization to retarget the motion into the target rig-space. We also observed the general workflow practiced by animators and incorporate this workflow into the optimization step.
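
As a toy illustration of the optimization step described above, the sketch below solves, per frame, for the rig parameters whose rigged joint positions best match the source joint positions; the two-parameter arm rig and the use of SciPy's least_squares are assumptions for illustration, since the paper's rigs and objective are far richer.

```python
# Sketch: per-frame rig-space retargeting as a non-linear least-squares fit of
# rig parameters to source joint positions, using a toy two-joint arm rig.
import numpy as np
from scipy.optimize import least_squares

UPPER, LOWER = 1.0, 0.8   # assumed bone lengths of the toy rig

def rig_forward(params):
    """Joint positions produced by rig parameters (shoulder, elbow angles)."""
    shoulder, elbow = params
    elbow_pos = np.array([UPPER * np.cos(shoulder), UPPER * np.sin(shoulder)])
    wrist_pos = elbow_pos + LOWER * np.array(
        [np.cos(shoulder + elbow), np.sin(shoulder + elbow)])
    return np.concatenate([elbow_pos, wrist_pos])

def retarget_frame(source_joint_positions, initial_params):
    """Solve for rig parameters that reproduce the source joint positions."""
    residual = lambda p: rig_forward(p) - source_joint_positions
    return least_squares(residual, initial_params).x

if __name__ == "__main__":
    target = rig_forward(np.array([0.4, 0.6]))          # a known source pose
    params = retarget_frame(target, initial_params=np.zeros(2))
    print(np.round(params, 3))       # should recover approximately [0.4, 0.6]
```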