• Title/Summary/Keyword: Motion Capture Data


Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that lets the user select a sequence of facial frames from a facial expression space, whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from directly, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and serve as candidate key frames for key frame animation. When the user zooms in (zoom is discrete), it means the user wants to see more detail, so the system creates more clusters for the new zoom level; every time the zoom-in level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification. From any level, the user can go back to the previous level by zooming out and update the navigation path. We had users control the facial expression of a 3D avatar with the system and evaluated the system based on the results.
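The zoom scheme described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: it uses a minimal plain k-means in place of the paper's fuzzy clustering, and the helper names (`clusters_for_level`, `kmeans`) are assumptions.

```python
import numpy as np

def clusters_for_level(level, base=11):
    """Number of clusters shown at a given zoom-in level (doubles per level)."""
    return base * (2 ** level)

def kmeans(points, k, iters=10, seed=0):
    """Minimal k-means; the centers serve as candidate key frames."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

# toy stand-in for 2,400 facial frames projected to a 2D screen
frames = np.random.default_rng(1).normal(size=(2400, 2))
level0 = kmeans(frames, clusters_for_level(0))  # 11 candidate key frames
level1 = kmeans(frames, clusters_for_level(1))  # 22 at the next zoom level
```

Each zoom-in simply re-clusters with twice as many centers, so the user refines key-frame choices along the previous level's navigation path.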

The Effects of the Stirrup Length Fitted to the Rider's Lower Limb Length on the Riding Posture for Less Skilled Riders during Trot in Equestrian (승마 속보 시 미숙련자에게 적용한 하지장 비율 74.04% 등자길이 피팅의 기승자세 효과)

  • Hyun, Seung-Hyun;Ryew, Che-Cheong
    • Korean Journal of Applied Biomechanics / v.25 no.3 / pp.335-342 / 2015
  • Objective: The purpose of this study was to analyze the effects of stirrup length fitted to the rider's lower limb length on less skilled riders during trot in equestrian events. Methods: Participants were less skilled riders (n=5; mean age: 40.02 ± 10.75 yrs; mean height: 169.77 ± 2.08 cm; mean body weight: 67.65 ± 7.76 kg; lower limb length: 97.26 ± 2.35 cm; mean horse height: 164.00 ± 5.74 cm), tested with two stirrup lengths (lower limb ratios 74.04% and 79.18%) during trot. The variables analyzed were the Y-axis and Z-axis displacement (head and center of mass [COM]) with asymmetry index, trunk front-rear angle (consistency index), lower limb joint angles (right hip, knee, and ankle), and the average vertical force of the rider during one stride in trot. Four camcorders (HDR-HC7/HDV 1080i, Sony Corp., Japan) were used to capture the riding motion at 60 frames/sec, and raw data were processed with the Kwon3D XP motion analysis package ver. 4.0 (Visol, Korea). Results: Movements and asymmetry index showed no significant difference at the head and COM. The 74.04% stirrup length showed significantly higher consistency in trunk tilting angle than the 79.18% stirrup length. Hip and knee joint angles at the 79.18% stirrup length showed a significantly more extended posture than at the 74.04% length, and the ankle showed more plantarflexion at the 79.18% length than at the 74.04% length. The average vertical force of the rider during the stance phase was significantly higher at the 79.18% stirrup length than at the 74.04% length. Conclusion: Considering the above, the 74.04% stirrup length could be effective for impulse reduction with a consistent posture in less skilled riders.
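As a quick numeric check of the fitting ratios above (an illustrative helper, not from the paper):

```python
def stirrup_length(lower_limb_cm, ratio_percent):
    """Stirrup length fitted as a fixed percentage of the rider's lower limb length."""
    return lower_limb_cm * ratio_percent / 100.0

# Using the mean lower limb length reported above (97.26 cm):
print(round(stirrup_length(97.26, 74.04), 2))  # 72.01
print(round(stirrup_length(97.26, 79.18), 2))  # 77.01
```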

The Effect of Shoe Heel Types and Gait Speeds on Knee Joint Angle in Healthy Young Women - A Preliminary Study

  • Chhoeum, Vantha;Wang, Changwon;Jang, Seungwan;Min, Se Dong;Kim, Young;Choi, Min-Hyung
    • Journal of Internet Computing and Services / v.21 no.6 / pp.41-50 / 2020
  • The consequences of wearing high heels can differ according to heel height, gait speed, shoe design, heel base area, and shoe size. This study focused on the knee extension and flexion range of motion (ROM) during gait under five different shoe heel types and two self-selected gait speeds (comfortable and fast) as experimental conditions. Measurement standards for knee extension and flexion ROM were individually calibrated at heel strike, mid-stance, toe-off, and stance phase based on 2-minute video recordings of each gait condition. Seven healthy young women (20.7 ± 0.8 years) participated; they were asked to walk on a treadmill wearing the five given shoes at a self-selected comfortable speed (average 2.4 ± 0.3 km/h) and a fast speed (average 5.1 ± 0.2 km/h) in random order. All shoes were size 23.5 cm. Three of the given shoes were 9.0 cm in heel height; the other two were flat shoes and sneakers. Motion capture software (Kinovea 0.8.27) was used to measure the kinematic data: changes in the knee angles during each gait. During fast gait, the knee extension angles at heel strike and mid-stance decreased significantly in all three high-heeled shoes (p<0.05). The results revealed that fast gait speed causes the knee flexion angle to increase significantly at toe-off in all five shoe types. There was also a significant difference in both knee flexion and extension angles when gait in stiletto heels and flat shoes were compared under the fast gait condition (p<0.05). This showed that walking fast in high heels leads to abnormal knee ROM and can thus damage the knee joints. The findings of this preliminary study can serve as a basis for future studies on kinematic changes in the lower extremity during gait and for analyzing the causes of, and preventive methods for, musculoskeletal injuries related to wearing high heels.
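The knee angle measured above is the angle at the knee formed by the hip and ankle landmarks in each video frame. A minimal sketch of that 2D measurement (illustrative only; Kinovea performs this interactively, and the point names are assumptions):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b (e.g. the knee) formed by a (hip) and c (ankle)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# a fully straight leg: hip, knee, ankle collinear -> 180 degrees
print(joint_angle((0, 0), (0, 1), (0, 2)))  # 180.0
```

Knee flexion can then be reported as the deviation from the straight-leg value, i.e. 180° minus this angle.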

A Study of the Stability on Standing posture of Single leg in Yoga practicing (요가 수련을 통한 한발서기 자세의 안정화 연구)

  • Yoo, Sil;Hong, Su-yeon;Yoo, Sun-sik
    • Korean Journal of Physical Education: Humanities and Social Science / v.55 no.6 / pp.749-757 / 2016
  • The purpose of this study was to investigate the effect of yoga practice on the stability of one-leg standing postures. Thirteen women college students who had never done yoga participated. To collect data before and after two years of yoga practice, we used a 3D motion capture system and electromyography. The results were as follows. First, the ranges of motion for the Y axis of the left knee joint and the X axis of the right ankle joint differed significantly in the dancer posture (p<.05), and the X axis of the right ankle and the Y axis of the left ankle joint differed significantly in the tree posture between pre- and post-training. Second, the planar alignment angle of the trunk-pelvis showed no significant difference in the dancer and tree postures. Third, CoM distances in the Y and Z directions differed significantly in the tree posture (p<.05). Fourth, muscle activities of both rectus abdominis muscles, the erector spinae, and the left quadriceps differed significantly in the tree posture (p<.05). These findings suggest that yoga training plays an important role in stabilizing posture by decreasing ankle joint rotation and CoM movement and by strengthening the core muscles. This study provides evidence for the effectiveness of yoga training on standing-posture stability and its potential for posture correction. Future studies should examine the alignment angle, as a measure of postural stabilization, in yoga training.

Intelligent Motion Pattern Recognition Algorithm for Abnormal Behavior Detections in Unmanned Stores (무인 점포 사용자 이상행동을 탐지하기 위한 지능형 모션 패턴 인식 알고리즘)

  • Young-june Choi;Ji-young Na;Jun-ho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.73-80 / 2023
  • The recent steep increase in the minimum hourly wage has increased the burden of labor costs, and the share of unmanned stores is growing in the aftermath of COVID-19. As a result, theft targeting unmanned stores is also increasing. To prevent such thefts, "Just Walk Out" systems have been introduced, relying on LiDAR sensors, weight sensors, and the like, or on manual checks through continuous CCTV monitoring. However, the more expensive the sensors used, the higher the initial and operating costs of the store, and CCTV verification is limited because managers cannot monitor around the clock. In this paper, we propose an AI image-processing fusion algorithm that removes this dependence on sensors and human monitoring, detects customers performing abnormal behaviors such as theft at a cost low enough for unmanned stores, and provides cloud-based notifications. We verify the accuracy of each component on behavior-pattern data collected from unmanned stores, using motion capture with MediaPipe, object detection with YOLO, and the fusion algorithm, and demonstrate the performance of the fusion algorithm through various scenario designs.
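The fusion idea above combines two per-frame signals: a pose-based motion score (e.g. derived from MediaPipe landmarks) and an object-detection confidence (e.g. from YOLO). A schematic decision rule, with hypothetical score names and thresholds that are assumptions rather than the authors' algorithm:

```python
def fuse(pose_anomaly_score, item_near_hand_conf,
         pose_thresh=0.6, item_thresh=0.5):
    """Flag a frame as suspicious only when both detectors agree.

    pose_anomaly_score: how abnormal the customer's motion pattern looks (0..1)
    item_near_hand_conf: detector confidence that an item is near the hand (0..1)
    """
    return pose_anomaly_score >= pose_thresh and item_near_hand_conf >= item_thresh

assert fuse(0.8, 0.7) is True   # both detectors fire -> raise an alert
assert fuse(0.8, 0.2) is False  # suspicious pose alone is not enough
```

Requiring agreement between the two models is one way a fusion rule can cut false alarms relative to either detector alone.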

Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho;Jung Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.5 / pp.383-390 / 2004
  • This paper proposes a method that controls the facial expression of a 3D avatar by having the user select a sequence of facial expressions in a space of facial expressions, and describes the system we built around it. The expression space is created from about 2,400 frames of motion-captured facial expression data. To represent the state of each expression, we use a distance matrix of the distances between pairs of feature points on the face; the set of distance matrices forms the space of expressions. However, in this space one state cannot in general reach another along the straight line between them, so we derive trajectories between two states from the captured set of expressions in an approximate manner. First, two states are regarded as adjacent if the distance between their distance matrices is below a given threshold. Any two states are considered to have a trajectory between them if there is a sequence of adjacent states connecting them, and one state is assumed to move to another via the shortest such trajectory, found by dynamic programming. The space of facial expressions, as the set of distance matrices, is multidimensional, and the facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this process, we visualize the space of expressions in 2D using multidimensional scaling (MDS). To see how effective the system is, we had users control the facial expressions of a 3D avatar with it; the users judged the system to be very useful for controlling the facial expression of a 3D avatar in real time.
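The trajectory search above can be sketched as a shortest path on the thresholded adjacency graph. This is an illustrative reconstruction, not the authors' code: it uses breadth-first search on the unweighted graph in place of the paper's dynamic programming, and the toy 1-D "distance matrices" stand in for the real flattened matrices.

```python
from collections import deque

import numpy as np

def shortest_trajectory(states, start, goal, threshold):
    """Shortest chain of adjacent expression states from start to goal.

    Two states are adjacent when the distance between their (flattened)
    distance matrices falls below the threshold.
    """
    n = len(states)
    dist = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=2)
    adj = dist < threshold
    prev = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:                      # reconstruct the path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in range(n):
            if adj[u, v] and v != u and v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # no chain of adjacent expressions connects the two states

states = np.array([[0.0], [1.0], [2.0], [3.0]])  # toy 1-D "distance matrices"
print(shortest_trajectory(states, 0, 3, threshold=1.5))  # [0, 1, 2, 3]
```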

Development of Gait Event Detection Algorithm using an Accelerometer (가속도계를 이용한 보행 시점 검출 알고리즘 개발)

  • Choi, Jin-Seung;Kang, Dong-Won;Mun, Kyung-Ryoul;Bang, Yun-Hwan;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics / v.19 no.1 / pp.159-166 / 2009
  • The purpose of this study was to develop an automatic gait event detection algorithm using a single accelerometer attached to the top of the shoe. The signal vector magnitude and the anterior-posterior (x-axis) component of the accelerometer were used to detect heel strike (HS) and toe off (TO), respectively. To evaluate the proposed algorithm, gait event timing was compared with that obtained from a force plate and kinematic data. In the experiment, seven subjects performed 10 trials of level walking under three conditions: fast, preferred, and slow walking. An accelerometer, a force plate, and a 3D motion capture system were used, with the force plate events serving as the reference timing. The results showed that the gait events detected by the accelerometer were similar to those from the force plate. The differences were spread about 22.33 ± 17.45 ms for HS and 26.82 ± 14.78 ms for TO, and most errors consistently fell within 20 ms. The difference between gait events from the kinematic data and the developed algorithm was also small. Thus, the developed algorithm can be used in outdoor walking experiments. Further study is needed to extract spatial gait variables by removing the gravity component.
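The detection rule above reduces to computing the resultant of the three acceleration axes and picking peaks in it. A minimal sketch (the threshold and sample values are illustrative assumptions, not the paper's calibrated parameters):

```python
import math

def signal_vector_magnitude(ax, ay, az):
    """Resultant acceleration from the three accelerometer axes."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def detect_peaks(signal, threshold):
    """Indices of local maxima above threshold (candidate gait events)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]

# toy samples in m/s^2: quiet stance, an impact spike, quiet stance
svm = [signal_vector_magnitude(x, y, z)
       for x, y, z in [(0.1, 0.1, 9.8), (2.0, 1.0, 12.0), (0.2, 0.1, 9.7)]]
heel_strikes = detect_peaks(svm, threshold=10.0)  # [1]
```

Toe off would be found the same way on the anterior-posterior (x-axis) component alone.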

FBX Format Animation Generation System Combined with Joint Estimation Network using RGB Images (RGB 이미지를 이용한 관절 추정 네트워크와 결합된 FBX 형식 애니메이션 생성 시스템)

  • Lee, Yujin;Kim, Sangjoon;Park, Gooman
    • Journal of Broadcast Engineering / v.26 no.5 / pp.519-532 / 2021
  • Recently, in fields such as games, movies, and animation, content that uses motion capture to build body models and create characters for 3D space is increasing. Studies are under way to generate animations with RGB-D cameras to avoid the filming costs of marker-based joint placement, but problems of pose estimation accuracy and equipment cost remain. In this paper, we therefore propose a system that feeds RGB images into a joint estimation network and converts the results into 3D data to create FBX-format animations, reducing the equipment cost of animation creation while increasing joint estimation accuracy. First, the 2D joints are estimated from the RGB image, and the 3D coordinates of the joints are estimated from these values. The result is converted to quaternions and applied as joint rotations, and an animation in FBX format is created. To measure the accuracy of the proposed method, the system was verified by comparing the error between an animation generated from the 3D positions of markers attached to the body and the animation generated by the proposed system.
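The rotation step above stores each joint's rotation as a quaternion, the form FBX animations carry per joint. A minimal sketch of one common conversion (axis-angle to quaternion; this is a generic formula, not the paper's full 2D-to-3D pipeline):

```python
import math

def axis_angle_to_quaternion(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a rotation of angle_rad about a unit axis."""
    half = angle_rad / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

# 90-degree rotation about the y (up) axis
q = axis_angle_to_quaternion((0.0, 1.0, 0.0), math.pi / 2)
# q is approximately (0.7071, 0.0, 0.7071, 0.0)
```

Per frame, each estimated joint rotation would be converted this way and written into the FBX animation curves.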

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.461-470 / 2023
  • While speech animation generation employing deep learning has been actively researched for English, there has been no prior work for Korean. This paper therefore employs supervised deep learning to generate Korean speech animation for the first time. In doing so, we find a significant effect: deep learning reduces speech animation research to speech recognition research, the predominant technique, and we study how to make the best use of this effect for Korean speech animation generation. The effect can help revitalize the recently inactive Korean speech animation research efficiently and effectively by clarifying the top-priority research target. This paper proceeds as follows: (i) it chooses the blendshape animation technique; (ii) implements the deep learning model as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module; (iii) builds a Korean speech facial motion capture dataset; (iv) prepares two comparison deep learning models (one adopting an English ASR module, the other a Korean ASR module, with the same basic structure for their FAC modules); and (v) trains the FAC modules of both models dependently on their ASR modules. A user study demonstrates that the model with the Korean ASR module and dependently trained FAC module (scoring 4.2/5.0) generates decisively more natural Korean speech animations than the model with the English ASR module (scoring 2.7/5.0). The result confirms the aforementioned effect, showing that the quality of Korean speech animation comes down to the accuracy of Korean ASR.

Attitude Confidence and User Resistance for Purchasing Wearable Devices on Virtual Reality: Based on Virtual Reality Headgears (가상현실 웨어러블 기기의 구매 촉진을 위한 태도 자신감과 사용자 저항 태도: 가상현실 헤드기어를 중심으로)

  • Sohn, Bong-Jin;Park, Da-Sul;Choi, Jaewon
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.165-183 / 2016
  • Over the past decade, there has been a rapid diffusion of technological devices and a rising number of device types, resulting in an escalation of virtual reality technology. The technology market has rapidly shifted from smartphones to wearable devices based on virtual reality. Virtual reality can make users feel real situations through sensing interaction, voice, motion capture, and so on. Facebook, Google, Samsung, LG, Sony, and others have invested in developing virtual reality platforms, and the price of virtual reality devices has dropped by about 30% since their launch. Market infrastructure for virtual reality has thus developed rapidly, growing the marketplace. However, most consumers find virtual reality devices difficult to purchase or use, which has kept consumers from forming positive attitudes toward the devices and purchasing them in the early market. Among previous studies related to virtual reality, few focus on why virtual reality devices have remained in the early stage of adoption and diffusion in the market. Most previous studies considered the reasons innovative products are hard to adopt from the viewpoints of the Typology of Innovation Resistance, MIR (Management of Innovation Resistance), and UTAUT & UTAUT2. However, product-based antecedents are also important for increasing user intention to purchase and use products in the technology market. In this study, we focus on user acceptance and resistance in promoting the purchase and use of virtual reality wearable devices, based on headgear products such as the Galaxy Gear. In particular, we added attitude confidence as a dimension of user resistance. The research questions of this study are as follows. First, how do attitude confidence and innovativeness resistance affect user intention to use? Second, what factors related to content and brand contexts can affect user intention to use?
This research collected data from participants in their 20s to 50s in South Korea who had experience using virtual reality headgear. To collect data, this study used a pilot test, and through face-to-face interviews with three specialists, the face validity and content validity of the questionnaire were evaluated. In cleansing the data, we dropped outliers and irrelevant responses. In total, 156 responses were used to test the suggested hypotheses. Demographics and the relationships among variables were analyzed through structural equation modeling with PLS. Of the respondents, 86 (55.1%) were male and 70 (44.9%) female. Most respondents were in their 20s (74.4%) or 30s (16.7%), and 126 respondents (80.8%) had used virtual reality devices. The results of our model estimation are as follows. With the exception of Hypotheses 1 and 7, which deal with the two relationships of brand awareness to attitude confidence and quality of content to perceived enjoyment, all of our hypotheses were supported. In line with our hypotheses, perceived ease of use (H2) and use innovativeness (H3) positively influenced attitude confidence. This finding indicates that the greater the ease of use and innovativeness of the devices, the greater users' attitude confidence. Perceived price (H4), enjoyment (H5), and quantity of contents (H6) significantly affected user resistance: perceived price positively affected user innovativeness resistance, whereas perceived enjoyment and quantity of contents negatively affected it. In addition, aesthetic exterior (H6) was positively associated with perceived price (p<0.01), and projection quality (H8) increased perceived enjoyment (p<0.05).
Finally, attitude confidence (H10) increased user intention to use virtual reality devices, whereas user resistance (H11) negatively affected it. The findings of this study show that attitude confidence and user innovativeness resistance influence customer intention to use virtual reality devices in different ways. Attitude confidence has two distinct antecedents: perceived ease of use and use innovativeness. This study also identified the antecedents behind the different roles of perceived price (aesthetic exterior) and perceived enjoyment (quality of contents and projection quality). The findings indicate that brand awareness and quality of contents for virtual reality have not yet taken shape within the virtual reality market. Therefore, firms should develop brand awareness for their products in the virtual reality market to increase market share.