• Title/Summary/Keyword: Motion capture analysis


Evaluation of Accuracy and Inaccuracy of Depth Sensor based Kinect System for Motion Analysis in Specific Rotational Movement for Balance Rehabilitation Training (균형 재활 훈련을 위한 특정 회전 움직임에서 피검자 동작 분석을 위한 깊이 센서 기반 키넥트 시스템의 정확성 및 부정확성 평가)

  • Kim, ChoongYeon; Jung, HoHyun; Jeon, Seong-Cheol; Jang, Kyung Bae; Chun, Keyoung Jin
    • Journal of Biomedical Engineering Research / v.36 no.5 / pp.228-234 / 2015
  • Balance ability decreases significantly in the elderly because of deterioration of the neuromuscular regulatory mechanisms. Several studies have investigated methods of improving balance ability using real-time systems, but these are limited by expensive test equipment and the need for specialized resources. Recently, the Kinect system, based on depth data, has been applied to address these limitations. Little information is available, however, about the accuracy/inaccuracy of the Kinect system, particularly in motion analysis for evaluating the effectiveness of rehabilitation training. Therefore, the aim of the current study was to evaluate the accuracy/inaccuracy of the Kinect system in specific rotational movements for balance rehabilitation training. Six healthy male adults with no musculoskeletal disorders participated in the experiment. Movements of the participants were induced by controlling the base plane of the balance training equipment in the anterior-posterior (AP), medial-lateral (ML), and right and left diagonal directions. The dynamic motions of the subjects were measured using two Kinect depth-sensor systems and, for comparative evaluation, a three-dimensional motion capture system with eight infrared cameras. The error rates for hip and knee joint alteration of the Kinect system, compared with the infrared-camera-based motion capture system, were smallest in the ML direction (hip joint: 10.9~57.3%, knee joint: 26.0~74.8%). Therefore, the accuracy of the Kinect system for measuring balance rehabilitation training could be improved by using an adapted algorithm based on hip joint movement in the medial-lateral direction.
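The error-rate comparison described in this abstract can be outlined in a few lines of code. The snippet below is a minimal sketch assuming time-aligned joint-centre trajectories and a simple range-of-motion error metric; the paper does not publish its exact formula, so the function and variable names here are hypothetical.

```python
import numpy as np

def range_of_motion_error(kinect_xyz: np.ndarray, mocap_xyz: np.ndarray) -> float:
    """Percentage error between the joint displacement range measured by the
    Kinect and by a reference optical motion capture system.

    Both inputs are (n_frames, 3) arrays of joint-centre positions that have
    already been resampled to a common rate and aligned in time.
    """
    # Displacement range (max - min) of each coordinate over the trial.
    kinect_range = kinect_xyz.max(axis=0) - kinect_xyz.min(axis=0)
    mocap_range = mocap_xyz.max(axis=0) - mocap_xyz.min(axis=0)
    # Error rate as a percentage of the reference range, averaged over x/y/z.
    return float(np.mean(np.abs(kinect_range - mocap_range) / mocap_range) * 100.0)

# Illustrative use with synthetic medial-lateral hip sway data.
t = np.linspace(0.0, 10.0, 300)
mocap_hip = np.stack([0.05 * np.sin(t), 0.02 * np.cos(t), 0.9 + 0.01 * np.sin(2 * t)], axis=1)
kinect_hip = mocap_hip + np.random.default_rng(0).normal(scale=0.005, size=mocap_hip.shape)
print(f"Hip joint error rate: {range_of_motion_error(kinect_hip, mocap_hip):.1f}%")
```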

Implementation of Human Motion Following Robot through Wireless Communication Interface

  • Choi, Hyoukryeol; Jung, Kwangmok; Ryew, SungMoo; Kim, Hunmo; Jeon, Jaewook; Nam, Jaedo
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2002.10a / pp.36.3-36 / 2002
  • Motion capture system · Exoskeleton mechanism · Kinematics analysis · Man-machine interface · Wireless communication · Control algorithm


Feasibility Study of Gait Recognition Using Points in Three-Dimensional Space

  • Kim, Minsung; Kim, Mingon; Park, Sumin; Kwon, Junghoon; Park, Jaeheung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.2 / pp.124-132 / 2013
  • This study investigated the feasibility of gait recognition using points on the body in three-dimensional (3D) space based on comparisons of four different feature vectors. To obtain the point trajectories on the body in 3D, gait motion data were captured from 10 participants using a 3D motion capture system, and four shoes with different heel heights were used to study the effects of heel height on gait recognition. Finally, the recognition rates were compared using four methods and different heel heights.
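As a rough illustration of point-based gait recognition (not the authors' four specific feature vectors, which the abstract does not define), the sketch below builds a simple statistical feature vector from 3D point trajectories and identifies a subject by nearest-neighbour matching.

```python
import numpy as np

def gait_feature_vector(points: np.ndarray) -> np.ndarray:
    """Build a simple gait feature vector from body-point trajectories.

    points: (n_frames, n_points, 3) positions over one gait cycle.  The
    feature used here (per-point mean and standard deviation) is only one
    plausible choice, not the paper's actual definitions.
    """
    return np.concatenate([points.mean(axis=0).ravel(), points.std(axis=0).ravel()])

def identify(query: np.ndarray, gallery: dict[str, np.ndarray]) -> str:
    """Nearest-neighbour identification: the enrolled subject whose stored
    feature vector is closest (Euclidean distance) to the query wins."""
    return min(gallery, key=lambda subject: np.linalg.norm(gallery[subject] - query))
```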

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won; Seo, Jeong-Woo; Choi, Jin-Seung; Choi, Jae-Bong; Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of using features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The obtained facial movement data were then converted into 11 parameters and presented as patterns for each monosyllable vocalization. The parameter patterns were used to train and recognize each monosyllable with speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
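A minimal sketch of the recognition stage described above, assuming the open-source hmmlearn package and one Gaussian HMM per monosyllable trained on the 11 facial-movement parameters; the data layout and model sizes are assumptions, not the paper's settings.

```python
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency (pip install hmmlearn)

def train_models(training_data: dict[str, list[np.ndarray]]) -> dict[str, hmm.GaussianHMM]:
    """Train one Gaussian HMM per monosyllable.

    training_data maps each monosyllable label to a list of (n_frames, 11)
    sequences of the 11 facial-movement parameters.
    """
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                    # stacked observations
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths
        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        models[label] = model
    return models

def recognize(models: dict[str, hmm.GaussianHMM], sequence: np.ndarray) -> str:
    """Return the monosyllable whose HMM gives the highest Viterbi
    log-probability for the observed parameter sequence."""
    return max(models, key=lambda label: models[label].decode(sequence)[0])
```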

The Development of A Basic Golf Swing Analysis Algorithm using a Motion Analysis System (동작분석 시스템을 이용한 골프 스윙 분석 기초 알고리즘 개발)

  • Seo, Jae-Moon; Lee, Hae-Dong; Lee, Sung-Cheol
    • Korean Journal of Applied Biomechanics / v.21 no.1 / pp.85-95 / 2011
  • Three-dimensional (3D) motion analysis is a useful tool for analyzing sports performance. During the last few decades, advances in motion analysis equipment have enabled increasingly complicated biomechanical analyses. Nevertheless, considering the complexity of biomechanical models and the amount of data recorded by a motion analysis system, subsequent processing of these data is required for event-specific motion analysis. The purpose of this study was to develop a basic golf swing analysis algorithm using a state-of-the-art VICON motion analysis system. The algorithm was developed to facilitate golf swing analysis, with special emphasis on 3D motion analysis and high-speed motion capture, which are not easily available from typical video camera systems. Furthermore, the developed algorithm generates golf-swing-specific kinematic and kinetic variables that can easily be used by golfers and coaches who do not have advanced biomechanical knowledge. We provide a basic algorithm to convert massive and complicated VICON data into common golf-swing-related variables. Future development is necessary for more practical and efficient golf swing analysis.
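The abstract does not list the specific swing variables its algorithm produces; as one plausible example of converting marker data into a golf-specific variable, the sketch below computes the shoulder-pelvis separation angle (the "X-factor") from four marker positions in a single frame. Names and coordinate conventions are assumptions.

```python
import numpy as np

def horizontal_angle(left: np.ndarray, right: np.ndarray) -> float:
    """Orientation (degrees) of the left-to-right marker line projected onto
    the horizontal plane; markers are (x, y, z) positions in metres."""
    dx, dy = right[0] - left[0], right[1] - left[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def x_factor(l_shoulder, r_shoulder, l_hip, r_hip) -> float:
    """Shoulder-pelvis separation angle for a single captured frame."""
    return horizontal_angle(l_shoulder, r_shoulder) - horizontal_angle(l_hip, r_hip)

# Illustrative use with made-up marker positions near the top of the backswing.
print(x_factor(np.array([0.20, 0.10, 1.40]), np.array([-0.10, -0.20, 1.40]),
               np.array([0.15, 0.05, 0.90]), np.array([-0.05, -0.10, 0.90])))
```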

Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.18 no.3 / pp.9-16 / 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because of the difference in the required capture resolution between full-body capture and facial expression capture, the two have rarely been performed simultaneously. However, for generating natural animation, simultaneous capture of body and face is essential. For this purpose, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality. The dimensionality reduction enables us to estimate the full data from a part of the data. We validate the method by applying it to dynamic scenes to show its viability.
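The PCA-based reconstruction idea can be sketched directly: build a basis from a database of full facial marker frames, then solve for the coefficients that best explain the few observed markers. This is a minimal sketch of the general technique, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def build_basis(database: np.ndarray, n_components: int):
    """PCA basis from a database of full facial marker frames.

    database: (n_samples, n_markers * 3), each row a flattened marker frame.
    Returns the mean frame and the top principal directions.
    """
    mean = database.mean(axis=0)
    _, _, vt = np.linalg.svd(database - mean, full_matrices=False)
    return mean, vt[:n_components]                 # shapes (D,), (k, D)

def reconstruct(mean, basis, observed_values, observed_idx):
    """Estimate a full marker frame from a few observed coordinates.

    Solves, in the least-squares sense, for the PCA coefficients that best
    explain the observed coordinates, then reconstructs every marker.
    """
    A = basis[:, observed_idx].T                   # (n_observed, k)
    b = observed_values - mean[observed_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + basis.T @ coeffs
```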

Parameter Analysis of Muscle Models for Arm Movement (팔 근육운동의 파라미터 분석)

  • Kim, Lae-Kyeom; Tak, Tae-Oh
    • Journal of Industrial Technology / v.28 no.A / pp.155-161 / 2008
  • Muscle force prediction in forward dynamic analysis of human motion depends on many parameters associated with muscle actuation. This research studies the effects of various parameters of a Hill-type muscle model using a simple hand-raising motion. Motion analysis was carried out using a motion capture system, and each muscle force was recorded for comparison with the muscle force generated by the model. Using the Hill-type muscle model, the muscle force needed to generate the same hand-raising motion was obtained by adjusting five activation parameters. The test showed the importance of the activation parameters for accurate generation of muscle force.
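For reference, a Hill-type muscle model combines an activation level with force-length and force-velocity relations plus a passive elastic term. The sketch below is a simplified, illustrative formulation with assumed default parameters, not the specific model or the five activation parameters used in the paper.

```python
import numpy as np

def hill_muscle_force(activation, fiber_length, fiber_velocity,
                      f_max=1000.0, l_opt=0.10, v_max=1.0):
    """Simplified Hill-type muscle force (N).

    activation      -- neural activation in [0, 1]
    fiber_length    -- current fibre length (m)
    fiber_velocity  -- lengthening velocity (m/s), negative when shortening
    f_max, l_opt, v_max -- maximum isometric force, optimal fibre length and
                           maximum shortening velocity (illustrative defaults)
    """
    # Gaussian active force-length relation centred on the optimal length.
    f_l = np.exp(-((fiber_length - l_opt) / (0.45 * l_opt)) ** 2)
    # Crude linear force-velocity relation: 0 at maximum shortening velocity,
    # 1 when isometric, saturating at 1.4 during lengthening.
    f_v = np.clip(1.0 + fiber_velocity / v_max, 0.0, 1.4)
    # Exponential passive force that engages beyond the optimal length.
    stretch = max(fiber_length - l_opt, 0.0) / l_opt
    f_p = 0.02 * (np.exp(5.0 * stretch) - 1.0)
    return f_max * (activation * f_l * f_v + f_p)

print(hill_muscle_force(activation=0.6, fiber_length=0.11, fiber_velocity=-0.2))
```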


Biomechanical Analysis and Evaluation Technology Using Human Multi-Body Dynamic Model (인체 다물체 동역학 모델을 이용한 생체역학 분석 및 평가 기술)

  • Kim, Yoon-Hyuk; Shin, June-Ho; Khurelbaatar, Tsolmonbaatar
    • Journal of the Korean Society for Nondestructive Testing / v.31 no.5 / pp.494-499 / 2011
  • This paper presents biomechanical analysis and evaluation technology for the musculoskeletal system using a multi-body human dynamic model and 3-D motion capture data. First, a medical-image-based geometric model and tissue material properties were used to develop the human dynamic model, and motion analysis techniques based on 3-D motion capture data were developed to quantify in-vivo joint kinematics, joint moments, joint forces, and muscle forces. Walking and push-up motions were investigated using the developed model. The present model and technologies would be useful for the biomechanical analysis and evaluation of human activities.
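Joint moments of the kind quantified here are typically obtained from a Newton-Euler inverse-dynamics balance on each segment. The sketch below shows this for a single foot segment in the sagittal plane; the 2D simplification and all segment parameters are assumptions for illustration, not the paper's full multi-body model.

```python
import numpy as np

def ankle_moment_2d(grf, cop, ankle, com, mass, com_acc, inertia, alpha, g=9.81):
    """Net ankle joint moment (N·m) on the foot segment in the sagittal plane,
    from a standard Newton-Euler inverse-dynamics balance.

    grf   -- ground reaction force (Fx, Fy)    cop -- centre of pressure (x, y)
    ankle -- ankle joint centre (x, y)         com -- foot centre of mass (x, y)
    mass, com_acc   -- foot mass and centre-of-mass acceleration
    inertia, alpha  -- foot moment of inertia and angular acceleration
    """
    grf, cop, ankle, com, com_acc = map(np.asarray, (grf, cop, ankle, com, com_acc))

    def cross2d(r, f):
        return r[0] * f[1] - r[1] * f[0]

    weight = np.array([0.0, -mass * g])
    # Linear equation of motion gives the joint reaction force at the ankle.
    joint_force = mass * com_acc - grf - weight
    # Moment balance about the foot centre of mass yields the net joint moment.
    return inertia * alpha - cross2d(cop - com, grf) - cross2d(ankle - com, joint_force)

# Illustrative quiet-standing frame (all values made up).
print(ankle_moment_2d(grf=(5.0, 700.0), cop=(0.12, 0.0), ankle=(0.05, 0.08),
                      com=(0.10, 0.05), mass=1.2, com_acc=(0.0, 0.0),
                      inertia=0.01, alpha=0.0))
```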

Classification of Behavioral Lexicon and Definition of Upper, Lower Body Structures in Animation Character

  • Hongsik Pak; Suhyeon Choi; Taegu Lee
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.103-117 / 2023
  • This study focuses on behavioural lexical classification for extracting animation character actions and on the analysis of characters' upper- and lower-body movements. The behaviour and state of characters are crucial in the animation industry, and digital technology is enhancing the industry's value. However, research on animation motion application technology and behavioural lexical classification is still lacking. Therefore, this study aims to classify the predicates that enable animation motion, differentiate the upper- and lower-body movements of characters, and apply the motion data of the behavioural lexicon. The necessity of this research lies in the potential contributions of advanced character motion technology to various industrial fields and in the use of the behavioural lexicon to elucidate and repurpose character motion. The research method applies grammatical, behavioural, and semantic predicate classification and behavioural motion analysis based on the characters' upper- and lower-body movements.

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon; Park, Dong-Joo; Lee, Tae-Gu
    • Cartoon and Animation Studies / s.37 / pp.221-245 / 2014
  • With the success of the world's first 3D computer-animated film, "Toy Story", in 1995, industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations were produced for TV, and high-quality 3D computer animation games became common. To reduce the large amount of time and cost of 3D animation production, technological development has been pursued actively, in accordance with the expansion of industrial demand in this field. Furthermore, compared with the traditional approach of producing animation through hand drawing, producing 3D computer animation is far more efficient. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation were conducted with the aim of improving the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, is relatively less sophisticated but provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper will serve as baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the degree of sophistication, and the media in use.