• Title/Summary/Keyword: Movement Recognition


Child Care Teachers' Playfulness and Teaching Intention: Focusing on the Mediating Effects of Recognition of Music and Movement Activities (보육교사의 놀이성과 음률지도 적극성: 음률활동에 대한 인식의 매개효과를 중심으로)

  • Lee, Ina;Lee, Wanjeong
    • Korean Journal of Child Studies
    • /
    • v.37 no.2
    • /
    • pp.1-11
    • /
    • 2016
  • Objective: This study examined how child care teachers' playfulness and their recognition of music and movement activities relate to their teaching intention for music and movement. Methods: Participants were 200 child care teachers in the Seoul, Incheon, and Gyeonggi areas. The data were analyzed with descriptive statistics, Pearson's correlation analysis, hierarchical multiple regression analysis, and the Sobel test. Results: The main results were as follows. First, child care teachers' playfulness, their teaching intention for music and movement, and their recognition of music and movement were positively correlated. Second, teachers' playfulness influenced their teaching intention for music and movement. Finally, teachers' recognition of music and movement mediated the relationship between their playfulness and their teaching intention. Conclusion: This study showed that teachers' playfulness fostered a positive recognition of music and movement activities, and that this recognition was the variable mediating the relationship between playfulness and teaching intention.
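The Sobel test named above can be sketched in a few lines; the path coefficients below are illustrative placeholders, not the study's actual estimates:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z-statistic for an indirect effect a*b, given the two
    path coefficients and their standard errors."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Hypothetical coefficients: playfulness -> recognition (a),
# recognition -> teaching intention (b); values are invented.
z = sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.08)
print(round(z, 3))  # a |z| above ~1.96 suggests a significant mediation
```

A |z| larger than about 1.96 would indicate a statistically significant indirect effect at the 5% level.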

Kinect Sensor-based LMA Motion Recognition Model Development

  • Hong, Sung Hee
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.367-372
    • /
    • 2021
  • The purpose of this study is to show that movement expression activity for intellectually disabled people is effective in a learning process based on Kinect-sensor LMA motion recognition. We implemented ICT motion recognition games for intellectually disabled people based on movement learning drawn from LMA. The movement characteristics in Laban's LMA include changes over time in movements performed by a body that perceives space, and the tension or relaxation of emotional expression. The design and implementation of the motion recognition model are described, and the feasibility of the proposed model is verified through a simple experiment. In the experiment, the 24 movement expression activities performed by 5 participants over 10 learning sessions showed an overall average concordance rate of 53.4% or higher. Learning games whose on-screen motions respond to changes in the player's motion had a positive effect on learning emotions.

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate facial movement parameters using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then learned and recognized for each monosyllable using speech recognition algorithms based on the Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
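The Viterbi decoding step named above can be illustrated with a toy discrete HMM; the states, observations, and probabilities below are invented for illustration and are not the paper's actual model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence
    under a discrete HMM (raw probabilities, not log-space, for brevity)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best], V[-1][best]

# Toy 2-state HMM over quantized facial-movement features
# (hypothetical names; the paper's actual parameters are not given).
states = ("open", "closed")
obs = ("wide", "wide", "narrow")
start_p = {"open": 0.6, "closed": 0.4}
trans_p = {"open": {"open": 0.7, "closed": 0.3},
           "closed": {"open": 0.4, "closed": 0.6}}
emit_p = {"open": {"wide": 0.8, "narrow": 0.2},
          "closed": {"wide": 0.3, "narrow": 0.7}}
print(viterbi(obs, states, start_p, trans_p, emit_p)[0])
```

In practice, log-probabilities would be used to avoid underflow on long sequences.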

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.1
    • /
    • pp.24-29
    • /
    • 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control, which tracks body gestures through the infrared camera of the Kinect device. The proposed model computes the variation in distance from the shoulder to the left and right hands, wrists, arms, and elbows as the body moves. The recognized motions are classified into movement commands such as move left, move right, up, down, enlarge, shrink, and select. The proposed Kinect-based model is natural and low-cost compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
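The direction-classification idea described above might be sketched as follows; the axis conventions, threshold, and choice of the hand joint are assumptions for illustration, not taken from the paper:

```python
def classify_motion(prev, curr, threshold=0.05):
    """Map the displacement of a tracked hand joint between two frames
    to one of the control commands (left/right/up/down/enlarge/shrink).
    Coordinates are (x, y, z) in metres; axis conventions are assumed."""
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    dz = curr[2] - prev[2]
    moves = {"right": dx, "left": -dx, "up": dy, "down": -dy,
             "enlarge": -dz, "shrink": dz}  # moving toward sensor enlarges
    command, magnitude = max(moves.items(), key=lambda kv: kv[1])
    return command if magnitude > threshold else "hold"

# Hand moves 20 cm to the right between frames -> "right"
print(classify_motion((0.0, 0.0, 1.0), (0.20, 0.02, 0.99)))
```

A real system would smooth the joint stream over several frames before thresholding, to suppress tracking jitter.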

Martial Arts Moves Recognition Method Based on Visual Image

  • Husheng, Zhou
    • Journal of Information Processing Systems
    • /
    • v.18 no.6
    • /
    • pp.813-821
    • /
    • 2022
  • Visual image technology is becoming increasingly sophisticated and plays a significant role in fields such as intelligent monitoring, entertainment, and medical rehabilitation. Recognizing Wushu (martial arts) movements with visual image technology helps promote and develop Wushu. To segment and extract Wushu movement signals, this study denoises the original data with the wavelet transform and applies a sliding-window data segmentation technique. A Wushu movement recognition model is then built on the hidden Markov model (HMM). The HMM is trained with the Baum-Welch algorithm, which is enhanced using a frequency-weighted training approach and a mean training method. To identify dynamic Wushu movements, the Viterbi algorithm determines the probability of the optimal state sequence for each movement model. On this basis, an HMM-based martial arts movement recognition model is developed. The recognition accuracy of the HMM model reaches 99.60% with 4,000 samples, exceeding the accuracy of the SVM (by 0.94%), the CNN (by 1.12%), and the BP network (by 1.14%). These results suggest that the proposed martial arts movement recognition system is reliable and effective, and that it may contribute to the growth of martial arts.
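The sliding-window segmentation stage can be sketched minimally; the window size and step below are illustrative, since the paper's actual parameters are not given:

```python
def sliding_windows(signal, size, step):
    """Cut a 1-D motion signal into fixed-size, overlapping windows,
    a minimal stand-in for the paper's segmentation stage."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

sig = list(range(10))          # stand-in for a denoised sensor stream
print(sliding_windows(sig, size=4, step=2))
```

Each window would then be fed to the per-movement HMMs for Viterbi scoring; overlapping windows (step < size) reduce the chance of a movement boundary splitting a gesture.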

Creativity Theory of Body Movement and Analysis of Creativity Factor (신체움직임의 창의성 이론과 요인분석)

  • Ahn, Byoung-Soon
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.12
    • /
    • pp.672-679
    • /
    • 2013
  • Creativity is the thinking ability, and the expression of new images through imagination, involved in recognizing and solving problems. This study aims to explore a creativity theory of body movement and to analyze its creativity factors. According to the study, the creativity of body movement develops in four steps: movement awareness, movement design, movement discovery, and movement use. Using new images through self-perception and self-concept brings about creative improvement in problem recognition and problem solving. In conclusion, the creativity of body movement means the infinity of body movement as 'the third energy' and 'the flexibility of flow' through interaction.

Speech Activity Detection using Lip Movement Image Signals (입술 움직임 영상 신호를 이용한 음성 구간 검출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.4
    • /
    • pp.289-297
    • /
    • 2010
  • This paper presents a method to prevent external acoustic noise from being misrecognized as speech during the speech activity detection stage of speech recognition, by checking lip movement image signals in addition to acoustic energy. First, successive images are obtained through a PC camera and the presence or absence of lip movement is discriminated. Next, the lip movement image data are stored in shared memory, which is shared with the speech recognition process. In the speech activity detection process, the preprocessing phase of speech recognition, whether the acoustic energy comes from the speaker's utterance is verified by checking the data in shared memory. As an experimental result of linking the speech recognition and image processors, the speech recognition result is output normally when the speaker faces the camera and speaks, and no result is output when the speaker speaks without facing the camera. In addition, the initial feature values and the initial template image captured off-line are replaced with those captured on-line, which improves the discrimination of lip movement image tracking. An image processing test bed was implemented to visually confirm the lip movement tracking process and to analyze the related parameters in real time. As a result of linking the speech and image processing systems, the interworking rate reached 99.3% under various illumination environments.
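The shared-memory coordination between the image and speech processes might look roughly like this; the flag name, typecode, and energy threshold are illustrative, not the paper's actual implementation:

```python
from multiprocessing import Value

# Shared flag written by the (hypothetical) lip-image process and read
# by the speech-activity detector in another process.
lip_moving = Value('b', 0)   # 1 while lip movement is detected

def accept_speech(acoustic_energy, energy_threshold=0.5):
    """Treat sound as speech only when it coincides with lip movement,
    so external acoustic noise alone is rejected."""
    return acoustic_energy > energy_threshold and bool(lip_moving.value)

lip_moving.value = 1
print(accept_speech(0.8))   # loud sound + visible lip movement
lip_moving.value = 0
print(accept_speech(0.8))   # same sound, no lip movement: rejected
```

In a real system the image process would update the flag continuously from camera frames, while the speech-detection process polls it during its preprocessing phase.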

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.6
    • /
    • pp.21-28
    • /
    • 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, such models have been deployed across all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted increasing attention. It is a challenging task because of the particular complexity of human movement: the same movement can be performed differently by different individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and the simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance; it also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion in video.
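As a rough sketch of the attention idea (a one-layer stand-in, not ADD-Net itself), softmax attention over per-frame features could look like this; the scoring vector and dimensions are invented:

```python
import numpy as np

def temporal_attention(features, w):
    """Softmax attention over frames: score each frame feature with a
    (hypothetical) learned vector w, then return the normalized weights
    and the attention-weighted average feature."""
    scores = features @ w                     # one score per frame, (T,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ features        # (T,), (D,)

rng = np.random.default_rng(0)
frames = rng.normal(size=(6, 4))   # 6 frames, 4-dim features per frame
w = rng.normal(size=4)
weights, pooled = temporal_attention(frames, w)
print(weights.round(3), pooled.shape)
```

Frames with higher scores contribute more to the pooled feature, which is the sense in which the model "focuses on" informative motion.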

A Study on the International Recognition of the COVID-19 Vaccination Certificates (코로나19 예방접종증명서의 국제적 인정에 관한 연구)

  • Jang, Su Yun;Kwon, Hun Yeong
    • Journal of Information Technology Services
    • /
    • v.20 no.6
    • /
    • pp.45-62
    • /
    • 2021
  • After the COVID-19 outbreak in 2019, the spread of the virus has not been easily contained despite each country's preventive measures, and it has hit the world hard, especially the economic and tourism sectors. Countries around the world are easing movement restrictions for vaccinated people in preparation for the post-COVID era. Under names such as "Vaccine Passport," "Vaccination Certificate," and "Digital Health Pass," measures are being implemented to allow vaccinated people to use multi-use facilities. However, there is no international agreement on cross-border movement, and each country maintains its own immigration policy. To return to pre-pandemic daily life, global agreements on the movement of vaccinated people between countries must be reached, and standards and implementation methods must be determined. This study focuses on the implementation and utilization of vaccination certificates suited to the COVID-19 era. We examine the spread of COVID-19 and the international response policies, investigate why vaccination certificates should be standardized and how far current standardization has progressed, and discuss the characteristics of certificate implementation and related considerations. For the certificates to be recognized internationally, institutional and technical considerations are identified, together with the security factors that may arise in each implementation. Finally, international recognition cases are discussed, and a method for implementing and utilizing vaccination certificates is proposed. Because of its timeliness, this paper can serve as a policy reference on certificate standards and the considerations for international recognition needed to restore movement between countries during the spread of COVID-19. If other infectious diseases occur in the future, or in similar cases where movement between countries is restricted, it can also serve as a reference for supporting the movement of verified people.

A Consecutive Motion and Situation Recognition Mechanism to Detect a Vulnerable Condition Based on Android Smartphone

  • Choi, Hoan-Suk;Lee, Gyu Myoung;Rhee, Woo-Seop
    • International Journal of Contents
    • /
    • v.16 no.3
    • /
    • pp.1-17
    • /
    • 2020
  • Human motion recognition is essential for user-centric services such as surveillance-based security, elderly condition monitoring, exercise tracking, and daily calorie expenditure analysis. It is typically based on analysis of movement data such as the acceleration and angular velocity of a target user. Existing motion recognition studies only measure basic information (e.g., the user's stride, number of steps, or speed) or recognize a single motion (e.g., sitting, running, walking). Thus, a new mechanism is required to identify transitions between single motions, to assess a user's consecutive motion more accurately, and to recognize the user's body and surrounding situations arising from the motion. In this paper, we collect human movement data in real time through Android smartphones for five target single motions and propose a mechanism that recognizes a consecutive motion, including transitions among the various motions and the situation that occurs, using a state transition model to check whether a vulnerable (life-threatening) condition, especially for the elderly, has occurred. Through implementation and experiments, we demonstrate that the proposed mechanism recognizes a consecutive motion and a user's situation accurately and quickly. In a recognition experiment on a mixed sequence resembling daily motion, the proposed adaptive weighting method showed improvements of 4% (holding time = 15 sec), 88% (30 sec), and 6.5% (60 sec) over the static method.
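The state-transition check described above can be sketched as follows; the motion set, allowed transitions, and holding limit are hypothetical, since the paper's actual transition table is not given:

```python
# Hypothetical transition table over single motions: each state maps to
# the set of motions that may legally follow it.
ALLOWED = {
    "standing": {"walking", "sitting", "standing"},
    "walking": {"standing", "running", "walking"},
    "running": {"walking", "running"},
    "sitting": {"standing", "lying", "sitting"},
    "lying": {"sitting", "lying"},
}

def check_sequence(motions, holding_limit=3):
    """Validate a sequence of recognized single motions against the
    transition table, and flag a vulnerable condition if 'lying'
    persists for holding_limit consecutive steps."""
    lying_run = 0
    for prev, curr in zip(motions, motions[1:]):
        if curr not in ALLOWED[prev]:
            return "invalid transition"
        lying_run = lying_run + 1 if curr == "lying" else 0
        if lying_run >= holding_limit:
            return "vulnerable"
    return "normal"

# An elderly user who sits down, lies down, and does not get up:
print(check_sequence(["standing", "sitting", "lying", "lying", "lying"]))
```

In the paper's setting each step would come from the smartphone's motion classifier at a fixed sampling interval, so the holding limit corresponds to a wall-clock holding time.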