• Title/Summary/Keyword: motion capture technology


A Study on the Creative Process of Creative Ballet <Youth> through Motion Capture Technology (모션캡처 활용을 통한 창작발레<청춘>창작과정연구)

  • Chang, So-Jung; Park, Arum
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.809-814
    • /
    • 2023
  • Currently, there is a lack of research that directly applies and integrates science and technology in the field of dance and translates it into creative work. In this study, the researcher applied motion capture to the creative dance performance 'Youth' and described the process of incorporating motion capture into scenes of the performance. The research method was practice-based research, which derives new knowledge and meaning from creative outcomes through the analysis of phenomena and experiences generated on site. The creative ballet performance <Youth> consists of four scenes, and the motion-captured video in these scenes serves as the highlight moments: it visually represents the image of a past ballerina while embodying a scene that is both the 'past me' and the 'dream of the present.' The use of motion capture enhances the visual representation of the scenes and increases the audience's immersion. The dance field needs to become familiar with collaborating with scientific and technological tools such as motion capture in order to digitize intangible assets, and it is essential to engage in experimental endeavors and to continue training for such collaborations. Furthermore, through collaboration, ongoing research should extend the scope of movement through digitized processes, performances, and performance records, continually conferring value and meaning on the field of dance.

A motion capture and mimic system for intelligent interactions (지능 접속을 위한 인체 운동 포착 및 재현 시스템)

  • Yoon, Joong-Sun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.5 no.5
    • /
    • pp.585-592
    • /
    • 1999
  • A new paradigm of technology, based on the overall interactions of technology, humans, and the environment, is explored. The history of technology and machines is reviewed in terms of human-machine interaction. Two proposed concepts of intelligent interaction, holism and embodiment, are based on the interaction of machines and humans through the human body: Körperlichkeit (corporeality). Human body movements are the result of long periods of evolution and are therefore highly optimized motions; complicated and flexible motions can be achieved easily by mimicking them. Motion capture and mimic systems based on electromagnetic, visual, and gyroscopic trackers are being implemented to demonstrate these concepts, and various motion mappings are investigated on these interactive systems. By exploring a new paradigm of technology through Körperlichkeit, an Eastern view of technology as relational may evolve to embrace the limitations of the Western view of machines as absolute, independent forms.


Recognition of Fighting Motion using a 3D-Chain Code and HMM (3차원 체인코드와 은닉마르코프 모델을 이용한 권투모션 인식)

  • Han, Chang-Ho;Oh, Choon-Suk;Choi, Byung-Wook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.8
    • /
    • pp.756-760
    • /
    • 2010
  • In this paper, a new method to recognize various fighting motions with the aid of an HMM is proposed. Four kinds of fighting motion are considered: hook, jab, uppercut, and straight. A motion graph is defined for each motion in the motion data, and a new 3D chain code is used to convert motion data into motion graphs. The recognition experiment was performed by applying the HMM algorithm to the motion graphs. The motion data were captured from five actors using a motion capture system developed in this study. Experimental results show a relatively high recognition rate of at least 85%.
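The paper does not specify its 3D chain code, but the general idea can be sketched: quantize each inter-frame displacement of a tracked joint into one of 26 discrete directions (the sign pattern of its x, y, z components), producing the symbol sequence an HMM would then be trained on. The function name and the 26-direction scheme here are illustrative assumptions, not the authors' exact encoding.

```python
def chain_code_3d(points, eps=1e-6):
    """Quantize consecutive 3D displacements into one of 26 direction codes.

    Each axis component of the displacement is mapped to -1, 0, or +1; the
    resulting sign triple (excluding the all-zero case) indexes one of the
    26 discrete directions around a grid cell.
    """
    codes = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        signs = []
        for d in (x1 - x0, y1 - y0, z1 - z0):
            signs.append(0 if abs(d) < eps else (1 if d > 0 else -1))
        sx, sy, sz = signs
        if (sx, sy, sz) == (0, 0, 0):
            continue  # stationary frame: emit no symbol
        # Map the 27 sign triples (minus the zero triple) onto codes 0..25.
        idx = (sx + 1) * 9 + (sy + 1) * 3 + (sz + 1)
        codes.append(idx if idx < 13 else idx - 1)  # skip the (0,0,0) slot
    return codes
```

The resulting integer sequence is a direction-only, speed-invariant description of the trajectory, which is what makes it a convenient observation alphabet for a discrete HMM.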

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.106-123
    • /
    • 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier broadcasts mixing the real and the virtual, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera driving technology to launch the virtual cartoon character talk show "Beast Town." The result blends the real and the virtual, further enhancing program immersion and the audio-visual experience and expanding the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis: the virtual scene must present dynamic effects while the digital human is driven and the picture moves with the push, pull, and pan of the camera. This places very high demands on multi-party data synchronization, real-time driving of the digital human, and rendering of the composited picture. We focus on issues such as virtual-real data docking and monocular-camera motion capture quality, combining outward camera tracking, multi-scene picture perspective, multi-machine rendering, and other solutions to effectively solve picture-linkage and rendering-quality problems in a deeply immersive space environment, presenting users with the visual effect of digital humans interacting with live guests.

Adaptation of Motion Capture Data of Human Arms to a Humanoid Robot Using Optimization

  • Kim, Chang-Hwan;Kim, Do-Ik
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2126-2131
    • /
    • 2005
  • Interactions of a humanoid with a human are important when the humanoid is requested to provide people with human-friendly services in unknown or uncertain environments. Such interactions may require more complicated and human-like behaviors from the humanoid. In this work the arm motions of a human are discussed as an early stage of human motion imitation by a humanoid. A motion capture system is used to obtain human-friendly arm motions as references. However, the captured motions cannot be applied directly to the humanoid, since differences in geometric and dynamic aspects, such as length, mass, degrees of freedom, and kinematic and dynamic capabilities, exist between the humanoid and the human. To overcome this difficulty, a method to adapt captured motions to a humanoid is developed. The geometric difference in arm length is resolved by scaling the arm length of the humanoid by a constant. Using the scaled geometry of the humanoid, imitation of the actor's arm motions is achieved by solving an inverse kinematics problem formulated as an optimization: the errors between the captured trajectories of the actor's arms and the approximated trajectories of the humanoid's arms are minimized, while such dynamic capabilities of the joint motors as limits on joint position, velocity, and acceleration are imposed as constraints. Two motions, waving one hand and performing a statement in sign language, are imitated by a humanoid through dynamics simulation.
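The retargeting scheme described above (scale by an arm-length ratio, then solve IK as an optimization under joint limits) can be sketched for a planar two-link arm. This is a toy stand-in, assuming gradient descent with clamped joint limits rather than the paper's actual solver; all names and parameter values are illustrative.

```python
import math

def fk(l1, l2, q1, q2):
    """Forward kinematics of a planar two-link arm."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def adapt_pose(target, actor_reach, robot_l1, robot_l2,
               q_min=-math.pi, q_max=math.pi, iters=2000, step=0.1):
    """Find robot joint angles so its hand tracks a captured hand position.

    The captured target (in actor coordinates) is first scaled by the ratio
    of robot reach to actor reach; joint angles are then found by gradient
    descent on the squared position error, with joint-limit clamping standing
    in for the paper's optimization constraints.
    """
    scale = (robot_l1 + robot_l2) / actor_reach
    tx, ty = target[0] * scale, target[1] * scale
    q1, q2 = 0.1, 0.1
    h = 1e-6  # finite-difference step for the numerical gradient
    for _ in range(iters):
        x, y = fk(robot_l1, robot_l2, q1, q2)
        base = (x - tx) ** 2 + (y - ty) ** 2
        x1, y1 = fk(robot_l1, robot_l2, q1 + h, q2)
        g1 = ((x1 - tx) ** 2 + (y1 - ty) ** 2 - base) / h
        x2, y2 = fk(robot_l1, robot_l2, q1, q2 + h)
        g2 = ((x2 - tx) ** 2 + (y2 - ty) ** 2 - base) / h
        # Descend on both joints, then clamp to the joint-position limits.
        q1 = min(max(q1 - step * g1, q_min), q_max)
        q2 = min(max(q2 - step * g2, q_min), q_max)
    return q1, q2
```

Running the same optimization per captured frame, with added velocity/acceleration limits between frames, is the shape of the trajectory-level problem the paper solves.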


Body Segment Length and Joint Motion Range Restriction for Joint Errors Correction in FBX Type Motion Capture Animation based on Kinect Camera (키넥트 카메라 기반 FBX 형식 모션 캡쳐 애니메이션에서의 관절 오류 보정을 위한 인체 부위 길이와 관절 가동 범위 제한)

  • Jeong, Ju-heon;Kim, Sang-Joon;Yoon, Myeong-suk;Park, Goo-man
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.405-417
    • /
    • 2020
  • Due to the popularization of Extended Reality, research is actively underway to reproduce human motion in real-time 3D animation. In particular, Microsoft's Kinect camera lets 3D motion information be obtained with simple operation and without a heavy facilities burden, and real-time animation can be generated by combining it with 3D formats such as FBX. Compared with marker-based motion capture systems, however, the Kinect has low accuracy owing to its limited joint-estimation performance. In this paper, two algorithms are proposed to correct joint-estimation errors and realize natural human motion in a Kinect-camera-based, FBX-format motion capture animation system. First, the person's position information is obtained with the Kinect and a depth map is created; erroneous joint position values are corrected using body-segment length constraints, and new rotation values are estimated. Second, preset joint-motion-range constraints are applied to the existing and estimated rotation values and implemented in FBX to eliminate abnormal poses. Experiments showed improvements in the rendered human motion, and errors were compared between the algorithms to demonstrate the effectiveness of the system.
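The two corrections described, restoring known segment lengths and clamping joints to an anatomical motion range, can be sketched as small geometric helpers. These are illustrative stand-ins for the paper's algorithms, not its actual implementation; function names and the degree-based clamp are assumptions.

```python
import math

def fix_bone_length(parent, child, length):
    """Re-project a child joint so the bone keeps its known segment length.

    Kinect sometimes places joints at anatomically impossible distances;
    keeping the estimated direction but restoring the true length removes
    that class of error.
    """
    delta = [c - p for c, p in zip(child, parent)]
    norm = math.sqrt(sum(d * d for d in delta)) or 1.0
    return tuple(p + d / norm * length for p, d in zip(parent, delta))

def clamp_joint_angle(angle_deg, lo, hi):
    """Restrict a joint rotation (degrees) to its anatomical motion range."""
    return max(lo, min(hi, angle_deg))
```

Applied down the skeleton hierarchy (pelvis outward), the first helper fixes positions; the second is then applied to the rotation values before they are written into the FBX animation curves.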

Documentation of Intangible Cultural Heritage Using Motion Capture Technology Focusing on the documentation of Seungmu, Salpuri and Taepyeongmu (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo;Go, Jungil;Kim, Yongsuk
    • Korean Journal of Heritage: History & Science
    • /
    • v.39
    • /
    • pp.351-378
    • /
    • 2006
  • With the development of media, methods for documenting intangible cultural heritage have also developed and diversified. In addition to the previous analogue ways of documentation, new multimedia technologies have recently been applied, focusing on digital pictures, sound sources, movies, etc. Among the new technologies, documentation of intangible cultural heritage using 'motion capture' has proved especially prominent in fields that require three-dimensional documentation, such as dances and performances. Motion capture refers to a documentation technology that records the time-varying position signals derived from sensors attached to the surface of an object. It converts the sensor signals into digital data that can be plotted as points on the virtual coordinates of a computer and records the movement of those points over time as the object moves. It produces scientific data for the preservation of intangible cultural heritage by displaying digital data that represent the virtual motion of a holder of an intangible cultural heritage. The National Research Institute of Cultural Properties (NRICP) has been working on the development of a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government, using the motion capture equipment widely employed for computer graphics in the movie and game industries. The project, supported by lottery funds, is designed to apply motion capture technology over three years, from 2005 to 2007, to 11 performances from 7 traditional dances whose body gestures have considerable value among the Important Intangible Cultural Heritage performances.
In 2005, the first year of the project, data were accumulated for solo dances that are relatively easy in terms of performing skills: Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the last year of the project, 2007, an education programme for comparative studies, analysis, and transmission of intangible cultural heritage, along with three-dimensional contents for public service, will be devised based on the accumulated data, as well as the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance). By describing the processes and results of the motion capture documentation of the Salpuri dance (Lee Mae-bang), Taepyeongmu (Kang Seon-young), and Seungmu (Lee Mae-bang, Lee Ae-ju, and Jung Jae-man) conducted in 2005, this report introduces a new approach to documenting intangible cultural heritage. During the first year of the project, two questions were raised. First, how can we capture the motions of a holder (dancer) without cutoffs during a rather long performance? After many tests, the motion capture system proved stable, with continuous results. Second, how can we reproduce accurate motion without the re-targeting process? For the first time in Korea, the project derived digital data for the shape of each dancer's body before the motion capture process, re-creating the dancers' gestures most accurately. The accurate three-dimensional body models of the four holders obtained by body scanning enhanced the accuracy of the motion capture of the dances.
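The core data product described above, time-varying sensor positions recorded as points on virtual coordinates, amounts to per-marker trajectories. A minimal sketch of that structure, with hypothetical field names (the actual archive format is not given in the abstract):

```python
from collections import defaultdict

def build_tracks(samples):
    """Group (time, marker_id, x, y, z) sensor samples into per-marker tracks.

    Each track is a time-ordered list of (t, (x, y, z)) points, i.e. the
    digital trajectory of one sensor on the performer's body.
    """
    tracks = defaultdict(list)
    for t, marker, x, y, z in samples:
        tracks[marker].append((t, (x, y, z)))
    for track in tracks.values():
        track.sort(key=lambda p: p[0])  # keep each trajectory in time order
    return dict(tracks)
```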

Incremental Image-Based Motion Rendering Technique for Implementation of Realistic Computer Animation (사실적인 컴퓨터 애니메이션 구현을 위한 증분형 영상 기반 운동 렌더링 기법)

  • Han, Young-Mo
    • The KIPS Transactions:PartB
    • /
    • v.15B no.2
    • /
    • pp.103-112
    • /
    • 2008
  • Image-based motion capture technology is often used to make realistic computer animation. In this paper we implement image-based motion rendering with a camera fixed to a PC. Existing image-based rendering algorithms suffer either from a high computational burden, which makes animation production take too long, or from low accuracy, which degrades the realism of the result. To compensate for these disadvantages, this paper presents an image-based motion rendering algorithm with low computational load and high estimation accuracy. In the proposed approach, an incremental motion rendering algorithm with low computational load is analyzed from the viewpoint of optimal control theory and revised so that its estimation accuracy is enhanced. Applied to optical motion capture systems, the proposed approach offers the additional advantages that motion capture can be performed without markers and at low cost in terms of equipment and space.
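The abstract does not give the incremental algorithm itself, but the general pattern of incremental estimation, constant per-frame cost with each new frame nudging the current estimate by a fraction of the innovation, can be sketched as follows. The fixed-gain form here is an assumption; the paper derives its gain from optimal control analysis.

```python
def incremental_estimate(measurements, gain=0.3, x0=0.0):
    """Incrementally refine a motion estimate from a stream of measurements.

    Each frame adjusts the current estimate by a fraction (gain) of the
    innovation, so per-frame cost stays constant; the gain trades
    responsiveness against measurement-noise smoothing.
    """
    x = x0
    history = []
    for z in measurements:
        x = x + gain * (z - x)  # incremental correction toward the new frame
        history.append(x)
    return history
```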

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.;Noichaiboon, Alongkot
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1191-1194
    • /
    • 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows the virtual trainer to observe and interact with the user in real time. The system consists of a web camera connected to a PC that watches the user move. First, the animated character on the screen makes a move and then instructs the user to follow it. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. Subsequently, the principal body parts of the extracted silhouette are located using a model-based approach. The motion of these body parts is then analyzed and compared with the motion of the animated character, and the system gives the user audio feedback according to the result of the comparison. All animation and video processing runs in real time on a PC-based system with a consumer-type camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment systems.
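One common form of statistical background subtraction, a per-pixel running-Gaussian model, can be sketched over a flat list of pixel intensities. This is a generic illustration of the technique named in the abstract, not the authors' specific method; the learning rate and threshold values are assumptions.

```python
def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of per-pixel running-Gaussian background subtraction.

    Pixels further than k standard deviations from the background mean are
    marked foreground (part of the user's silhouette); background pixels
    update the model in place with learning rate alpha.
    """
    fg = []
    for i, v in enumerate(frame):
        std = var[i] ** 0.5
        if abs(v - mean[i]) > k * max(std, 1.0):   # floor avoids zero-variance lockup
            fg.append(1)                            # foreground: silhouette pixel
        else:
            fg.append(0)
            d = v - mean[i]
            mean[i] += alpha * d                    # slowly adapt the background
            var[i] = (1 - alpha) * var[i] + alpha * d * d
    return fg
```

Only background-classified pixels update the model, so a user standing still is not absorbed into the background too quickly.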


Inertial Motion Sensing-Based Estimation of Ground Reaction Forces during Squat Motion (관성 모션 센싱을 이용한 스쿼트 동작에서의 지면 반력 추정)

  • Min, Seojung;Kim, Jung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.4
    • /
    • pp.377-386
    • /
    • 2015
  • Joint force/torque estimation by inverse dynamics is a traditional tool in biomechanical studies. Conventionally, the kinematic data of the human body are obtained by motion capture cameras, whose bulkiness and occlusion problems make it hard to capture a broad range of movement. As an alternative, inertial motion sensing using cheap, small inertial sensors has been studied recently. In this research, the performance of inertial motion sensing, especially for computing inverse dynamics, is studied. Kinematic data from inertial motion sensors are used to calculate the ground reaction force (GRF), which is compared with force plate readings (the ground truth) and additionally with the estimate from an optical method. The GRF estimate showed high correlation and low normalized RMSE (R = 0.93, normalized RMSE < 0.02 of body weight), performing even better than the conventional optical method. This result demonstrates that inertial motion sensing is accurate enough to be used in inverse dynamics analysis.
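The basic physics behind estimating vertical GRF from kinematics is Newton's second law summed over body segments: the ground must supply the weight plus the net inertial force. A minimal sketch, assuming per-segment masses and vertical center-of-mass accelerations from the inertial pipeline (the paper's full model, with segment parameters and 3D dynamics, is more elaborate):

```python
G = 9.81  # gravitational acceleration, m/s^2

def vertical_grf(segment_masses, segment_accels):
    """Estimate vertical ground reaction force by Newton's second law.

    Summing m_i * (a_i + g) over body segments gives the total vertical
    force the ground must supply; segment accelerations come from the
    inertial motion-sensing pipeline (upward positive).
    """
    return sum(m * (a + G) for m, a in zip(segment_masses, segment_accels))
```

For a quietly standing subject all accelerations are zero and the estimate reduces to body weight, which is a useful sanity check against the force plate.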