• Title/Summary/Keyword: Motion Capture Data

Search results: 280

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.106-123 / 2024
  • Taking the talk show "Beast Town" as an example, this article introduces the overall technical solution, the main technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier live broadcasts that mixed real and virtual elements, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera driving technology to launch the virtual cartoon character talk show "Beast Town," which blends the real and the virtual, enhances program immersion and the audio-visual experience, and expands the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis: the virtual scene must present dynamic effects while the digital human is driven in real time and moves with the push, pull, and pan of the overall shot. This places very high demands on multi-source data synchronization, real-time digital-human driving, and composite picture rendering. We focus on issues such as integrating virtual and real data and the quality of monocular-camera motion capture, and we combine outward-facing camera tracking, multi-scene picture perspective, and multi-machine rendering to solve picture linkage and rendering quality problems in a deeply immersive space environment, presenting users with visual effects in which digital humans and live guests interact.

Distance Measuring Method for Motion Capture Animation (모션캡쳐 애니메이션을 위한 거리 측정방법)

  • Lee, Heei-Man;Seo, Jeong-Man;Jung, Suun-Key
    • The KIPS Transactions: Part B / v.9B no.1 / pp.129-138 / 2002
  • In this paper, a distance measuring algorithm for motion capture using color stereo cameras is proposed. Color markers attached to the joints of an actor are captured by stereo color video cameras, and the image regions that share each marker's color are separated from the other colors by finding the dominant wavelength. Color data in RGB (red, green, blue) color space are converted into the CIE (Commission Internationale de l'Eclairage) color space for the purpose of calculating wavelength, and the dominant wavelength is selected from a histogram of neighboring wavelengths. The motion of the character in virtual space is then controlled by a program using the distance information of the moving markers. A simplified sketch of this marker-segmentation and stereo-distance idea follows this entry.
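
The abstract above describes an RGB-to-CIE conversion, a dominant-wavelength histogram, and a stereo distance computation. The following Python sketch is only a rough, assumption-laden illustration of that pipeline: it stands in a hue-angle histogram around the D65 white point for a true dominant-wavelength lookup and uses a simple pinhole stereo model; none of the constants or helper names come from the paper.

```python
# Hedged sketch (not the paper's implementation): segment a color marker by a
# dominant-chromaticity histogram and estimate its distance from stereo disparity.
import numpy as np

# Linear sRGB -> CIE XYZ (D65 white point)
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
D65_xy = np.array([0.3127, 0.3290])

def chromaticity(img_rgb):
    """Convert an HxWx3 float RGB image to CIE xy chromaticity coordinates."""
    xyz = img_rgb.reshape(-1, 3) @ RGB2XYZ.T
    s = xyz.sum(axis=1, keepdims=True) + 1e-9
    return xyz[:, :2] / s                      # (N, 2) array of (x, y)

def dominant_hue_mask(img_rgb, n_bins=64, tol=0.15):
    """Pick the most frequent hue angle about the white point (a crude stand-in
    for the paper's dominant wavelength) and mask pixels close to it."""
    xy = chromaticity(img_rgb)
    ang = np.arctan2(xy[:, 1] - D65_xy[1], xy[:, 0] - D65_xy[0])
    hist, edges = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    dominant = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    mask = np.abs(np.angle(np.exp(1j * (ang - dominant)))) < tol
    return mask.reshape(img_rgb.shape[:2])

def stereo_distance(u_left, u_right, focal_px, baseline_m):
    """Pinhole stereo model: depth = f * B / disparity of the marker centroid."""
    disparity = float(u_left - u_right)
    return focal_px * baseline_m / max(disparity, 1e-6)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)          # placeholder for a captured frame
    mask = dominant_hue_mask(img)
    print("marker pixels:", int(mask.sum()))
    print("distance [m]:", stereo_distance(u_left=410.0, u_right=395.0,
                                           focal_px=800.0, baseline_m=0.12))
```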

A Study on the Creative Process of Creative Ballet <Youth> through Motion Capture Technology (모션캡처 활용을 통한 창작발레<청춘>창작과정연구)

  • Chang, So-Jung; Park, Arum
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.809-814 / 2023
  • Currently, there is a lack of research that directly applies and integrates science and technology in the field of dance and translates it into creative work. In this study, the researcher applied motion capture to the creative dance performance 'Youth' and described the process of incorporating motion capture into scenes of the performance. The research method was practice-based research, which derives new knowledge and meaning from creative outcomes through the analysis of phenomena and experiences generated on site. The creative ballet performance <Youth> consists of four scenes, and the motion-captured video serves as their highlight moments: it visually represents the image of a past ballerina while embodying a scene that is both the 'past me' and the 'dream of the present.' The use of motion capture enhances the visual representation of the scenes and increases the audience's immersion. The dance field needs to become familiar with collaborating with scientific and technological tools such as motion capture in order to digitize its intangible assets, and it must continue experimental work and training for such collaborations. Furthermore, ongoing research should extend the scope of movement through digitized processes, performances, and performance records, continually conferring value and meaning on the field of dance.

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The obtained facial movement data were reduced to 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then learned and recognized for each monosyllable using a speech recognition approach based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis. A minimal HMM classification sketch follows this entry.
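
The recognition step described above (one pattern per monosyllable, HMM plus Viterbi decoding) can be illustrated with a minimal, hypothetical sketch using the hmmlearn package: one Gaussian HMM is trained per syllable and a test sequence is assigned to the model with the highest log-likelihood. The data layout, state count, and toy features are assumptions, not the paper's setup.

```python
# Hedged sketch, not the paper's code: classify monosyllables from per-frame facial
# movement parameters by training one Gaussian HMM per syllable.
import numpy as np
from hmmlearn import hmm          # assumes the hmmlearn package is installed

N_FEATURES = 11                   # per-frame facial movement parameters (as in the abstract)

def train_models(train_data, n_states=4):
    """train_data: dict syllable -> list of (n_frames, N_FEATURES) arrays."""
    models = {}
    for syllable, sequences in train_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=0)
        m.fit(X, lengths)
        models[syllable] = m
    return models

def classify(models, sequence):
    """Return the syllable whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda s: models[s].score(sequence))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for the 11 Korean vowel monosyllables and 3 repetitions each
    toy = {s: [rng.normal(i, 1.0, size=(30, N_FEATURES)) for _ in range(3)]
           for i, s in enumerate(["a", "i", "u"])}
    models = train_models(toy)
    print(classify(models, rng.normal(1, 1.0, size=(30, N_FEATURES))))  # likely "i"
```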

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society / v.22 no.2 / pp.11-19 / 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Direct mapping from motion capture data to the facial rig is convenient because artists are already familiar with the rig, and the mapping results are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial, because facial rigs vary widely in structure, which makes it hard to devise a generalized mapping method. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose face shapes differ greatly from a human's. A hedged sketch of a simple data-driven capture-to-rig mapping follows this entry.
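
As one hedged illustration of a data-driven capture-to-rig mapping (not the paper's actual method), the sketch below fits a regularized linear map from capture feature vectors to artist-set rig control values and applies it per frame; the feature and control dimensions, the ridge weight, and the [0, 1] control range are assumptions.

```python
# Hedged sketch of a data-driven capture-to-rig mapping: fit a ridge-regularized
# linear map from capture features to example rig control values, then apply it
# to new capture frames.
import numpy as np

def fit_retarget(capture_feats, rig_controls, ridge=1e-3):
    """capture_feats: (N, F) marker/blendshape features; rig_controls: (N, C)."""
    X = np.hstack([capture_feats, np.ones((len(capture_feats), 1))])  # bias term
    A = X.T @ X + ridge * np.eye(X.shape[1])
    W = np.linalg.solve(A, X.T @ rig_controls)                        # (F+1, C)
    return W

def apply_retarget(W, frame_feats):
    """Map one frame of capture features onto rig control values."""
    x = np.append(frame_feats, 1.0)
    return np.clip(x @ W, 0.0, 1.0)    # assume rig controls are normalized to [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.random((50, 20))                  # 50 example frames, 20 features
    controls = np.clip(feats[:, :8] * 0.9, 0, 1)  # toy artist-set rig controls
    W = fit_retarget(feats, controls)
    print(apply_retarget(W, feats[0]))
```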

Feasibility Study of Gait Recognition Using Points in Three-Dimensional Space

  • Kim, Minsung;Kim, Mingon;Park, Sumin;Kwon, Junghoon;Park, Jaeheung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.2 / pp.124-132 / 2013
  • This study investigated the feasibility of gait recognition using points on the body in three-dimensional (3D) space based on comparisons of four different feature vectors. To obtain the point trajectories on the body in 3D, gait motion data were captured from 10 participants using a 3D motion capture system, and four shoes with different heel heights were used to study the effects of heel height on gait recognition. Finally, the recognition rates were compared using four methods and different heel heights.
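
The abstract does not name the four feature vectors or recognition methods, so the sketch below is only a generic stand-in: it resamples 3D point trajectories over a gait cycle into fixed-length feature vectors and identifies the walker by nearest neighbor.

```python
# Hedged, generic illustration of gait recognition from 3D point trajectories;
# the actual feature vectors and methods compared in the paper are not specified
# in the abstract.
import numpy as np

def trajectory_feature(points, n_samples=20):
    """points: (n_frames, n_points, 3) trajectories for one gait cycle.
    Resample each coordinate to n_samples frames and flatten to a vector."""
    n_frames = points.shape[0]
    t_old = np.linspace(0.0, 1.0, n_frames)
    t_new = np.linspace(0.0, 1.0, n_samples)
    flat = points.reshape(n_frames, -1)
    resampled = np.stack([np.interp(t_new, t_old, flat[:, j])
                          for j in range(flat.shape[1])], axis=1)
    return resampled.ravel()

def recognize(gallery, probe_feat):
    """gallery: dict subject_id -> list of feature vectors. Return nearest subject."""
    best_id, best_d = None, np.inf
    for subject, feats in gallery.items():
        d = min(np.linalg.norm(probe_feat - f) for f in feats)
        if d < best_d:
            best_id, best_d = subject, d
    return best_id

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gallery = {s: [trajectory_feature(rng.normal(s, 0.1, (60, 15, 3)))]
               for s in range(3)}                 # 3 toy subjects, 15 body points
    probe = trajectory_feature(rng.normal(1, 0.1, (55, 15, 3)))
    print("recognized subject:", recognize(gallery, probe))
```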

An Introduction of Myo Armband and Its Comparison with Motion Capture Systems

  • Cho, Junghun;Lee, Jang Hyung;Kim, Kwang Gi
    • Journal of Multimedia Information System / v.5 no.2 / pp.115-120 / 2018
  • Recently, ways of accurately measuring three-dimensional hand movements have been actively researched so that the measurement data can be used in therapeutic and rehabilitation programs. This paper introduces the Myo Armband, a wearable device comprising a 3-axis accelerometer, a 3-axis gyroscope, and electromyographic sensors, and compares its performance with that of a motion capture system, which is known to provide fairly accurate measurements of angular movement. Dart-throwing and wrist-winding motions made up the movement scenarios. The paper also discusses one of the Armband's advantages, portability, and suggests its potential as a substitute for previously used devices. The measurement accuracy obtained was comparable to that of the three-dimensional measurement device. A sketch of one way to compare such angle trajectories follows this entry.
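
A comparison like the one described above can be illustrated, under assumptions about sampling rates and metrics, by resampling the armband and motion capture angle trajectories onto a common time base and computing RMSE and Pearson correlation; the sketch below is not the paper's analysis script.

```python
# Hedged sketch: compare an angular trajectory measured by the Myo armband's IMU
# with the same motion measured by an optical motion capture system.
import numpy as np

def resample(t, values, t_common):
    """Linearly interpolate a signal onto a shared time vector."""
    return np.interp(t_common, t, values)

def compare(t_myo, ang_myo, t_mocap, ang_mocap, n=500):
    """Return RMSE and Pearson correlation over the overlapping time span."""
    t0, t1 = max(t_myo[0], t_mocap[0]), min(t_myo[-1], t_mocap[-1])
    t_common = np.linspace(t0, t1, n)
    a = resample(t_myo, ang_myo, t_common)
    b = resample(t_mocap, ang_mocap, t_common)
    rmse = float(np.sqrt(np.mean((a - b) ** 2)))
    r = float(np.corrcoef(a, b)[0, 1])
    return rmse, r

if __name__ == "__main__":
    t_mocap = np.linspace(0, 2, 240)               # e.g. 120 Hz optical system
    t_myo = np.linspace(0, 2, 100)                 # e.g. 50 Hz IMU stream
    truth = 60 * np.sin(2 * np.pi * t_mocap)       # toy wrist angle in degrees
    noisy = 60 * np.sin(2 * np.pi * t_myo) + np.random.default_rng(3).normal(0, 2, 100)
    rmse, r = compare(t_myo, noisy, t_mocap, truth)
    print(f"RMSE = {rmse:.2f} deg, Pearson r = {r:.3f}")
```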

A Study on the Development of Digital Space Design Process Using User′s Motion Data (사용자 모션데이터를 활용한 디지털 공간디자인 프로세스 개발에 관한 연구)

  • 안신욱;박혜경
    • Korean Institute of Interior Design Journal / v.13 no.3 / pp.187-196 / 2004
  • The purpose of this study is to develop a digital space design process that uses the user's motion data, through theoretical and experimental work. In developing the design process, this study concentrated on finding a digital method that applies users' interactive responses. Introducing the concept of spatial form generated by the user's experience, we proposed a digital design process using the user's motion data. In the experimental stage, the user's motion data were extracted and transferred as digital information by means of user behavior analysis, an optical motion capture system, an immersive VR system, 3D software, and computer programming. As a result, a useful digital design process was embodied by building a digital form-transforming method with 3D software that provides internal algorithms. This study is meaningful in that it attempts a creative and interactive digital space design method through theoretical study and experimental approach, avoiding the dehumanization of existing methods.

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE: Computer Systems and Theory / v.33 no.4 / pp.231-241 / 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on the machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user. A toy Q-learning sketch follows this entry.
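
The abstract describes training clip-selection behavior with Q-learning. The toy sketch below shows tabular Q-learning over a discretized state space where actions correspond to motion clips; the environment dynamics and reward are placeholders, not the paper's simulation.

```python
# Hedged toy sketch of tabular Q-learning over motion clips, not the paper's system:
# states and rewards stand in for the avatar's situation and task-specific feedback.
import numpy as np

N_STATES, N_CLIPS = 10, 4          # discretized situations x available motion clips
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
rng = np.random.default_rng(4)
Q = np.zeros((N_STATES, N_CLIPS))

def environment_step(state, clip):
    """Toy dynamics: playing a clip moves the avatar to a new situation and yields
    a reward (placeholder for e.g. reaching a target or avoiding a collision)."""
    next_state = (state + clip + rng.integers(0, 2)) % N_STATES
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

for episode in range(2000):
    state = rng.integers(N_STATES)
    for _ in range(20):
        # Epsilon-greedy choice of the next motion clip
        clip = rng.integers(N_CLIPS) if rng.random() < EPS else int(np.argmax(Q[state]))
        next_state, reward = environment_step(state, clip)
        # Standard Q-learning update
        Q[state, clip] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, clip])
        state = next_state

print("greedy clip per situation:", np.argmax(Q, axis=1))
```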

Measurement of grasping reach by three-dimensional motion capture (3차원 동작측정 방법에 의한 인체 파악한계 측정)

  • 박재희;고봉기;김진호
    • Proceedings of the ESK Conference / 1997.04a / pp.85-89 / 1997
  • We used a three-dimensional motion capture method to measure the grasping reach of Koreans. The method was well suited to grasping reach measurement, with low measurement error and high efficiency. We measured the grasping reach of 29 males and 21 females at different heights from the seat reference level: -10, 0, 30, 60, and 90 cm. The grasping reach data were summarized at 15-degree intervals in polar coordinates for comparison with previous research. If the number of subjects is increased in a supplementary study, the grasping reach data can be used in the ergonomic design of driver cabins or industrial workstations. A sketch of this kind of polar summary follows this entry.
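
The abstract mentions summarizing reach data at 15-degree intervals in polar coordinates. The sketch below illustrates one way such a summary could be computed (maximum radial reach per azimuth bin and measurement height); the column conventions and matching tolerance are assumptions, not the paper's procedure.

```python
# Hedged sketch: summarize measured 3D grasp points into maximum reach per
# 15-degree azimuth bin, per measurement height above the seat reference point.
import numpy as np

HEIGHTS_CM = [-10, 0, 30, 60, 90]      # measurement planes from the abstract
BIN_DEG = 15

def reach_envelope(points_xyz):
    """points_xyz: (N, 3) grasp points (cm) relative to the seat reference point.
    Returns a dict: height -> array of max radial reach per 15-degree azimuth bin."""
    envelope = {}
    for h in HEIGHTS_CM:
        at_h = points_xyz[np.isclose(points_xyz[:, 2], h, atol=5.0)]
        radius = np.hypot(at_h[:, 0], at_h[:, 1])
        azimuth = np.degrees(np.arctan2(at_h[:, 1], at_h[:, 0])) % 360
        bins = (azimuth // BIN_DEG).astype(int)
        env = np.zeros(360 // BIN_DEG)
        for b, r in zip(bins, radius):
            env[b] = max(env[b], r)
        envelope[h] = env
    return envelope

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pts = np.column_stack([rng.normal(0, 40, 1000), rng.normal(40, 20, 1000),
                           rng.choice(HEIGHTS_CM, 1000).astype(float)])
    env = reach_envelope(pts)
    print({h: np.round(v, 1) for h, v in env.items()})
```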
