• Title/Summary/Keyword: Motion Capture Animation

A Study about Control of real working through the synchronized 3D Game Character and Motion Capture System (3D게임 캐릭터와 모션 캡쳐 시스템의 연동을 통한 실사 움직임(Real Working) 제어 연구)

  • Kim, Tae-Yul; Ryu, Seuc-Ho; Kyung, Byung-Pyo
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.278-281 / 2006
  • In the contents industry, and especially in game contents, 3D is the area receiving the most attention. As games continue to develop both qualitatively and quantitatively, interest in 3D grows accordingly, and studies and practical applications of 3D are being pursued most actively in the game industry. It remains true, however, that 3D game characters do not move as naturally as earlier 2D characters did. To overcome the limitations of keyframe-based movement control, movement control using a motion capture system is being developed rapidly. This study focuses on producing natural character animation by controlling movement through the linkage of 3D game characters with a motion capture system; its purpose is the production of natural 3D game character animation from a single central action.

A Study on Comparing algorithms for Boxing Motion Recognition (권투 모션 인식을 위한 알고리즘 비교 연구)

  • Han, Chang-Ho; Kim, Soon-Chul; Oh, Choon-Suk; Ryu, Young-Kee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.8 no.6 / pp.111-117 / 2008
  • In this paper, we describe boxing motion recognition as used in games and animation. To recognize boxing motions we use two algorithms: principal component analysis (PCA) and dynamic time warping (DTW). PCA is the simplest of the true eigenvector-based multivariate analyses and is often used to reduce multidimensional data sets to lower dimensions for analysis. DTW is an algorithm for measuring the similarity between two sequences that may vary in time or speed. We introduce and compare the PCA and DTW algorithms, and we also describe the motion capture system, developed in our research, on which the recognition was implemented. A motion graph is created from the boxing motion data acquired with the motion capture system and normalized in a preprocessing step. The recognition system was evaluated with five actors, and the results show its recognition performance.
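
Since PCA and DTW are both standard algorithms, the DTW half of the comparison is easy to illustrate. The following Python sketch is not the paper's implementation; the function names and the nearest-template classifier are illustrative assumptions:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two motion sequences.

    a, b: arrays of shape (n_frames, n_features), e.g. flattened
    joint angles per frame. Returns the minimal cumulative
    frame-alignment cost over all monotonic warping paths.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

def recognize(query, templates):
    """Label the query motion with the template of smallest DTW cost.
    `templates` maps a label (e.g. 'jab', 'hook') to a reference sequence."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

DTW's tolerance of speed variation is what makes it attractive here: two punches thrown at different tempos still align frame to frame.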

A Data Driven Motion Generation for Driving Simulators Using Motion Texture (모션 텍스처를 이용한 차량 시뮬레이터의 통합)

  • Cha, Moo-Hyun; Han, Soon-Hung
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.7 s.262 / pp.747-755 / 2007
  • To improve the realism of motion simulators, data-driven motion generation has been introduced, which simply records and replays the motion of real vehicles. Real samples yield a high level of realism, but allow no interaction between the user and the simulation. In character animation, by contrast, user-controllable motions are generated from a database of motion capture signals together with appropriate control algorithms. In this study we propose a new motion generation method as a tool for an interactive data-driven driving simulator. We sample motion data from a real vehicle, transform the data into an appropriate data structure (the motion block), and store a series of such blocks in a database. During simulation, the system searches the database for optimal motion blocks and synthesizes them into a motion stream reflecting the current simulation conditions and parameterized user demands. We demonstrate the value of the proposed method through experiments with the integrated motion platform system.
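
The abstract leaves the motion block structure unspecified, but the database search it describes can be sketched as a weighted nearest-neighbor lookup over per-block feature vectors. Everything named below (the class, the feature summary, the weighting) is an assumption for illustration:

```python
import numpy as np

class MotionBlockDatabase:
    """Hypothetical container for 'motion blocks': fixed-length windows
    of recorded platform motion, each summarized by a feature vector
    (e.g. mean acceleration, turn rate)."""

    def __init__(self, blocks, features):
        self.blocks = blocks                    # list of (n_frames, dof) arrays
        self.features = np.asarray(features)    # (n_blocks, n_features)

    def best_block(self, query, weights):
        """Pick the block whose features best match the current simulation
        state and user demands, under a weighted squared distance."""
        diff = (self.features - query) * weights
        return self.blocks[int(np.argmin((diff * diff).sum(axis=1)))]
```

At run time the simulator would query `best_block` once per window and cross-blend consecutive blocks into a continuous motion stream.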

Design and Realization of Stereo Vision Module For 3D Facial Expression Tracking (3차원 얼굴 표정 추적을 위한 스테레오 시각 모듈 설계 및 구현)

  • Lee, Mun-Hee; Kim, Kyong-Sok
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.533-540 / 2006
  • In this study we propose a facial motion capture technique that tracks facial motions and expressions effectively using a stereo vision module with two CMOS image sensors. The proposed tracking algorithm uses a center-point tracking technique and a correlation tracking technique based on neural networks. Experimental results show that the two tracking techniques using stereo vision motion capture track general facial expressions with success rates of 95.6% and 99.6% for 15 and 30 frames, respectively. When the lips trembled, however, the success rates of the center-point tracking technique (82.7%, 99.1%) were far higher than those of the correlation tracking technique (78.7%, 92.7%).
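
Of the two techniques compared, center-point tracking is the simpler to sketch: re-find each facial feature as the centroid of a thresholded region near its last known position. The window size, Otsu thresholding, and lost-feature fallback below are illustrative assumptions, not details from the paper:

```python
import cv2

def track_center_point(gray, prev_pos, win=24):
    """Re-locate one facial feature as the centroid of a thresholded
    region inside a search window around its previous position.

    gray: 8-bit grayscale frame from one of the two CMOS sensors
    prev_pos: (x, y) feature position in the previous frame
    """
    x, y = prev_pos
    x0, y0 = max(0, x - win), max(0, y - win)
    roi = gray[y0:y + win, x0:x + win]
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    if m['m00'] == 0:
        return prev_pos                  # feature lost; keep last position
    return (int(x0 + m['m10'] / m['m00']), int(y0 + m['m01'] / m['m00']))
```

Running the same tracker on both sensors gives a left/right pixel pair per feature, from which depth follows by stereo triangulation.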

Creating Stick Figure Animations Based on Captured Motion Data (모션 캡쳐 데이터에 기초한 스틱 피규어애니메이션 제작)

  • Choi, Myung Geol; Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society / v.21 no.1 / pp.23-31 / 2015
  • We present a method for creating realistic 2D stick figure animations easily and rapidly using captured motion data. Stick figure animations are typically created by manually drawing a single pose for each frame over the entire time interval. In contrast, our method allows the user to summarize an action (e.g. a kick or a jump) over an extended period of time in a single image, in which one or more action lines are drawn over a stick figure to represent the moving directions of body parts. To synthesize a series of time-varying poses automatically from the given image, our system first builds a deformable character model that can produce arbitrary deformations of the user's stick figure drawing in the 2D plane. The system then searches a pre-recorded motion database for the motion segment that best fits the given pose and action lines. Deforming the character model to imitate the retrieved motion segment produces the final stick figure animation. We demonstrate the usefulness of our method for creating interesting stick figure animations with little effort through experiments using a variety of stick figure styles and captured motion data.
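
The retrieval step scores motion segments against both the drawn pose and the action lines. One plausible scoring function, sketched under assumed conventions (2D joint arrays, unit-vector action lines, an ad hoc weight `alpha`), is:

```python
import numpy as np

def segment_score(segment, drawn_pose, action_lines, alpha=1.0):
    """Score a candidate motion segment against the user's drawing.

    segment: (n_frames, n_parts, 2) projected joint positions
    drawn_pose: (n_parts, 2) the stick-figure pose drawn by the user
    action_lines: list of (part index, unit 2D direction) pairs
    """
    pose_cost = np.linalg.norm(segment[0] - drawn_pose)   # match the drawn pose
    action_cost = 0.0
    for part, direction in action_lines:
        disp = segment[-1, part] - segment[0, part]       # where the part travels
        norm = np.linalg.norm(disp)
        if norm > 1e-6:
            # penalize motion that strays from the drawn action line
            action_cost += 1.0 - float(np.dot(disp / norm, direction))
    return pose_cost + alpha * action_cost

def best_segment(segments, drawn_pose, action_lines):
    return min(segments, key=lambda s: segment_score(s, drawn_pose, action_lines))
```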

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.; Noichaiboon, Alongkot
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1191-1194 / 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows the virtual trainer to observe and interact with the user in real time. The system consists of a web camera connected to a PC that watches the user move. First the animated character on the screen makes a move, then it instructs the user to follow the movement. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. The principal body parts of the extracted silhouette are then located using a model-based approach, and the motion of these body parts is analyzed and compared with the motion of the animated character. The system gives the user audio feedback according to the result of the motion comparison. All animation and video processing runs in real time on a PC-based system with a consumer-grade camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment systems.
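
The silhouette stage can be approximated with any statistical background model. The sketch below substitutes OpenCV's MOG2 Gaussian-mixture subtractor for the unnamed method in the paper, so treat it as a stand-in rather than the authors' algorithm:

```python
import cv2

cap = cv2.VideoCapture(0)                       # consumer web camera
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # per-pixel foreground test
    mask = cv2.medianBlur(mask, 5)              # suppress speckle noise
    # MOG2 marks shadows as 127; keep only confident foreground (255)
    _, silhouette = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    cv2.imshow('silhouette', silhouette)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```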

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung; Lee, Eun Ji; Kim, Ha Eun; Park, Minji; Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.85-92 / 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because this style reflects the character's personality, it is very important to preserve it and keep it consistent. However, when the character's motion is directly driven by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that preserves the style of the character's motion using only a small amount of animation data created specifically for that character. Instead of machine learning approaches, which require a large amount of training data, we suggest a search-based method that directly searches the animation data for the character pose most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and animation data created by an expert designer for a virtual reality game. To verify that our method preserves the original motion style of the character, we compared our result with the result obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
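
A minimal version of the described search-based control is a per-frame nearest-neighbor lookup over the character's own animation frames. The names and array layouts below are assumptions; the paper's distance measure and any temporal smoothing are not reproduced:

```python
import numpy as np

def nearest_character_pose(user_pose, anim_poses, sensor_joints):
    """Return the animation frame whose tracked joints best match the
    user's sensor readings.

    user_pose: (k, 3) positions reported by k motion sensors
    anim_poses: (n_frames, n_joints, 3) the character's animation data
    sensor_joints: indices of the k character joints tied to the sensors
    """
    candidates = anim_poses[:, sensor_joints, :]          # (n_frames, k, 3)
    d = np.linalg.norm(candidates - user_pose, axis=(1, 2))
    return anim_poses[int(np.argmin(d))]                  # full-body pose
```

Because every returned pose comes verbatim from the designer's animation data, the character can never leave its authored style, which is the point of the search-based design.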

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun; Lee, Jung; Kim, Chang-Hun; Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1019-1031 / 2017
  • In this study, we propose a new method for simulating water that responds to human motion. Motion data obtained from motion capture devices are represented as a jointed skeleton, which interacts with the velocity field of the water simulation. To integrate the motion data into the water simulation space, a mapping must be established between two fields with different properties; if the mapping breaks down, severe numerical instability can result and the realism of the human-water interaction suffers. To address this problem, our method extends the joint velocity mapped to each grid point to the neighboring nodes, and we refine these extended velocities to increase the robustness of the water solver. Our experimental results demonstrate that the water animation responds convincingly to human motions such as walking and jumping.
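
The extension step, spreading each mapped joint velocity to neighboring grid nodes, might look like the sketch below; the hat-shaped falloff and the blending rule are assumptions rather than the paper's formulation:

```python
import numpy as np

def extend_joint_velocities(grid_v, joint_pos, joint_vel, radius, h):
    """Map each skeleton joint velocity to its owning grid node and
    extend it to neighboring nodes with a distance-based falloff.

    grid_v: (nx, ny, nz, 3) water velocity field (modified in place)
    joint_pos: (n_joints, 3) joint positions in world space
    joint_vel: (n_joints, 3) joint velocities from the mocap skeleton
    radius: extension radius in cells; h: grid cell size
    """
    nx, ny, nz, _ = grid_v.shape
    for p, v in zip(joint_pos, joint_vel):
        i, j, k = (p / h).astype(int)                 # owning grid node
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                for dk in range(-radius, radius + 1):
                    ii, jj, kk = i + di, j + dj, k + dk
                    if not (0 <= ii < nx and 0 <= jj < ny and 0 <= kk < nz):
                        continue
                    dist = np.sqrt(di * di + dj * dj + dk * dk)
                    w = max(0.0, 1.0 - dist / (radius + 1))
                    # blend the joint velocity into the field, strongest
                    # at the owning node and fading with distance
                    grid_v[ii, jj, kk] = (1 - w) * grid_v[ii, jj, kk] + w * v
    return grid_v
```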

The Study about the Expression Method of Timing which Produce Movement in the Animation (애니메이션에서 움직임을 연출하는 타이밍 표현방법에 관한 연구)

  • Bang, Woo-Song; Kim, Soon-Gohn
    • Journal of Game and Entertainment / v.1 no.1 / pp.55-62 / 2005
  • The expression of movement is one of the important factors that build up an animated work. In animation it is determined entirely by the animator's experience, whereas in film the expression of movement depends on data obtained from motion capture or from the movements of the actors. Timing is one of the most important factors in expressing character movement: a proper understanding of the directing context and of timing expression makes the animation visually richer, and these are the basic means of giving feeling to the characters. In this study we identify the basic principles of timing expression in animation, experiment with changes of timing produced by the camera angle, compare the results, and present the most appropriate methods of timing expression.

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo; Lee, Jeeho
    • Journal of the Korea Computer Graphics Society / v.22 no.2 / pp.11-19 / 2016
  • This paper proposes a method for directly retargeting facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps the artist create facial animation. Direct mapping from motion capture data to the facial rig is highly convenient, because artists are already familiar with facial rigs and the mapping results are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial, because facial rigs vary widely in structure, making it hard to devise a generalized mapping method. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from a human's.
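
As one concrete instance of a data-driven capture-to-rig mapping (not necessarily the model the paper uses), a linear map can be fit by least squares from example pairs of capture frames and artist-authored rig control values:

```python
import numpy as np

def fit_retarget_map(capture_examples, rig_examples):
    """Fit W minimizing ||capture @ W - rig||^2 over example pairs.

    capture_examples: (n_examples, n_marker_coords) capture frames
    rig_examples: (n_examples, n_controls) artist-authored rig values
    """
    W, *_ = np.linalg.lstsq(capture_examples, rig_examples, rcond=None)
    return W

def retarget_frame(capture_frame, W):
    """Map one frame of facial capture data to rig control values,
    ready for the artist's follow-up editing on the rig."""
    return capture_frame @ W
```

Because the examples pair marker data with whatever controls a given rig exposes, the same fitting procedure applies unchanged to rigs of different structure, which is the appeal of a data-driven mapping.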