• Title/Summary/Keyword: Facial motion


Virtual Human Authoring ToolKit for a Senior Citizen Living Alone (독거노인용 가상 휴먼 제작 툴킷)

  • Shin, Eunji; Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.9 / pp.1245-1248 / 2020
  • Elderly people living alone need smart care to support independent living. Recent advances in artificial intelligence allow easier interaction with a computer-controlled virtual human, which can realize services such as a medicine-intake guide for the elderly living alone. In this paper, we propose an intelligent virtual human and present our authoring toolkit for controlling virtual humans for senior citizens living alone. The toolkit maps the gestures, emotions, and voice of a virtual human, and, once configured to author virtual human interactions, allows a suitable virtual human to respond with facial expressions, gestures, and voice.
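
The abstract describes an authoring table that maps a situation to a virtual human's expression, gesture, and voice. The paper does not publish its data model; a minimal Python sketch of such a mapping might look like this (all situation names, field names, and responses below are hypothetical illustrations, not the authors' toolkit):

```python
from dataclasses import dataclass


@dataclass
class VirtualHumanResponse:
    """One authored response: facial expression, body gesture, spoken line."""
    expression: str
    gesture: str
    voice_line: str


# Hypothetical authoring table mapping a recognized situation to a response.
RESPONSES = {
    "medicine_time": VirtualHumanResponse(
        "smile", "point_at_pillbox", "It is time to take your medicine."),
    "greeting": VirtualHumanResponse(
        "smile", "wave", "Good morning!"),
}


def respond(situation: str) -> VirtualHumanResponse:
    """Look up the authored response, falling back to a neutral idle state."""
    return RESPONSES.get(
        situation, VirtualHumanResponse("neutral", "idle", ""))
```

A runtime would then drive the animation and text-to-speech layers from the returned record; the point of the table is that non-programmers can author new situation/response pairs.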

Late reconstruction of post-traumatic enophthalmos and hypoglobus using three-dimensional implants: a case series

  • Choi, Jae Hyeok; Baek, Wooyeol
    • Archives of Craniofacial Surgery / v.23 no.5 / pp.232-236 / 2022
  • Post-traumatic enophthalmos and hypoglobus are common sequelae of facial bone fractures, even after reduction surgery. They are associated with functional and esthetic issues, which may lower the quality of life. These deformities frequently present late, and adequate correction is difficult. We report three cases of late inferior orbital rim reconstruction with three-dimensional printed implants to help resolve these problems. The average interval between the traumatic event and surgery was 3 years and 4 months. One patient was treated with a completely absorbable implant and exhibited satisfactory results until the implant started to biodegrade 1 year and 9 months after surgery. Two patients were treated with a permanent implant and demonstrated satisfactory results, although longer follow-up is needed. There were no complications such as infection, diplopia, or restriction of ocular motion, and the patients were satisfied with the esthetic results.

The Effect of Cognitive Movement Therapy on Emotional Rehabilitation for Children with Affective and Behavioral Disorder Using Emotional Expression and Facial Image Analysis (감정표현 표정의 영상분석에 의한 인지동작치료가 정서·행동장애아 감성재활에 미치는 영향)

  • Byun, In-Kyung; Lee, Jae-Ho
    • The Journal of the Korea Contents Association / v.16 no.12 / pp.327-345 / 2016
  • The purpose of this study was to carry out a cognitive movement therapy program for children with affective and behavioral disorders, grounded in neuroscience, psychology, motor learning, muscle physiology, biomechanics, human motion analysis, and movement control, and to quantify the characteristics of expressions and gestures according to changes in facial expression driven by emotional change. We observed the problematic expressions of children with affective disorders and estimated the efficacy of the movement therapy program from changes in their facial expressions. By quantifying emotion and behavior therapy analysis together with kinematic analysis, this converged measurement and analysis method for human development can be expected to accumulate data for the early detection and treatment of developmental disorders. The results of this study could therefore be extended to the disabled, the elderly, and the sick as well as to children.

3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok; Han, Gyu-Hoon; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and demand for technology that generates high-quality three-dimensional (3D) models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture is spotlighted as a technology that can create a 3D human model faster and more precisely than existing 3D model creation methods. In this study, we analyze high-precision 3D facial production technology through practical cases, covering the difficulties of content production and the technologies applied in volumetric 3D and 4D model creation. Based on an actual model implemented through 3D volumetric capture, we examine techniques for 3D virtual human face production and produced a new metahuman using a graphics pipeline for efficient human face generation.

SURGICAL CORRECTION OF TORTICOLLIS USING BIPOLAR RELEASE AND Z-PLASTY (Bipolar release와 Z-Plasty를 이용한 선천적 사경증의 치험례)

  • Jeong, Jong-Cheol; Kim, Keon-Jung; Lee, Jeong-Sam; Min, Heung-Ki; Choi, Jae-Sun
    • Maxillofacial Plastic and Reconstructive Surgery / v.18 no.3 / pp.388-395 / 1996
  • Congenital muscular torticollis (CMT) is a disorder characterized by shortening of at least one of the cervical muscles, most commonly the sternocleidomastoid (SCM) muscle, and tilting of the head to the opposite side. The pathogenesis and etiology of CMT have not been clearly identified, but proposed causes include fetal malposition, birth trauma, vascular accident, heredity, infection, and CNS pathology. Untreated CMT often causes facial asymmetry, the result of tensional rotation of the face toward the affected side, so early treatment may prevent facial and neck asymmetry and limitation of neck movement. Many treatment methods exist for CMT, both conservative and operative; bipolar release and Z-plasty of the SCM muscle has been introduced for cases in which conservative treatment has failed. The benefits of this method are preservation of the normal V-contour of the neck and improvement of neck motion. We treated CMT using bipolar release and Z-plasty in two patients. Afterward, both patients showed an improved range of neck motion and maintained the normal V-contour of the neck, so we report these two cases of CMT with a review of the literature.

Virtual Reality for Dental Implant Surgical Education (가상현실을 이용한 치과 임플란트 수술 교육)

  • Moon, Seong-Yong; Choi, Bong-Du; Moon, Young-Lae
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.12 / pp.169-174 / 2016
  • In this study, we evaluated a virtual reality model for dental implant surgery and discussed a method for building a virtual reality surgical environment from real patient data. The anatomical model of the patient's face was fabricated from facial and oral scan data based on CT data. The simulation scenario was composed in a step-by-step fashion with Unity3D, covering everything this patient model required, from incision and sinus bone graft to implant installation and bone grafting. We used an HMD and Leap Motion for immersiveness and the feeling of a real operation. Twenty training doctors took part in this simulation study, and their satisfaction was surveyed by questionnaire. The implant surgery education program showed its potential as an educational tool for dental students and training doctors. Virtual reality for surgical education with an HMD and Leap Motion has advantages in terms of low price and easy access.

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup; Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposes a face tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting the area of video motion. Movement is detected by taking the difference image of two consecutive frames and then removing noise with a median filter and erosion and dilation operations. To extract skin color from the moving area, the color information of sample images is used. The skin color region and the background are separated by evaluating similarity with membership functions generated from MIN-MAX values as fuzzy data. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face region is tracked using knowledge-based features of the detected eyes and mouth. The experiment includes 1,500 video frames from 10 subjects, 150 frames per subject. The results show a 95.7% motion detection rate (the motion areas of 1,435 frames were detected) and 97.6% successful face tracking (1,401 faces were tracked).
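
The motion-detection front end described above (frame differencing, then a median filter and erosion/dilation to clean up noise) is a standard pipeline. The paper gives no code; a minimal NumPy sketch of that pipeline might look like this (the 3x3 kernel size and the difference threshold are my assumptions, not values from the paper):

```python
import numpy as np


def _neighborhoods(mask):
    """Stack of the nine 3x3-shifted copies of a binary mask (zero-padded)."""
    padded = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([padded[r:r + h, c:c + w]
                     for r in range(3) for c in range(3)])


def frame_difference(prev_frame, curr_frame, threshold=25):
    """Binary motion mask from the absolute difference of two consecutive
    grayscale frames (uint8 arrays of equal shape)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)


def median3(mask):
    """3x3 median (majority-vote) filter to remove salt-and-pepper noise."""
    return (_neighborhoods(mask).sum(axis=0) >= 5).astype(np.uint8)


def erode3(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    return _neighborhoods(mask).min(axis=0)


def dilate3(mask):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    return _neighborhoods(mask).max(axis=0)


def motion_mask(prev_frame, curr_frame):
    """Difference image -> median filter -> morphological opening."""
    mask = frame_difference(prev_frame, curr_frame)
    mask = median3(mask)
    return dilate3(erode3(mask))
```

In practice the same steps are one-liners in OpenCV (`cv2.absdiff`, `cv2.medianBlur`, `cv2.erode`, `cv2.dilate`); the explicit version above just makes each stage of the pipeline visible.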

Face and Hand Tracking Algorithm for Sign Language Recognition (수화 인식을 위한 얼굴과 손 추적 알고리즘)

  • Park, Ho-Sik; Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.11C / pp.1071-1076 / 2006
  • In this paper, we develop face and hand tracking for a sign language recognition system. The system is divided into two stages: initial and tracking. In the initial stage, skin features are used to localize the signer's face and hands. An ellipse model on CbCr space is constructed and used to detect skin color. After the skin regions have been segmented, face and hand blobs are defined using size and facial features, under the assumption that the face moves less than the hands in this signing scenario. In the tracking stage, motion estimation is applied only to the hand blobs; the first and second derivatives are used to compute the predicted position of the hands. We observed tracking-position errors between consecutive frames in which the velocity changed abruptly. To improve tracking performance, our proposed algorithm compensates for this error by using an adaptive search area to recompute the hand blobs. The experimental results indicate that the proposed method decreases the prediction error by up to 96.87%, with a negligible increase in computational complexity of up to 4%.
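
The first/second-derivative prediction the abstract mentions amounts to a constant-acceleration model over the last three tracked positions, with the search window widened when the previous prediction missed. A small sketch of that idea, with the widening rule as my own assumption (the paper does not specify its formula):

```python
import numpy as np


def predict_position(p_prev2, p_prev1, p_curr):
    """Second-order prediction of the next hand position from the last
    three tracked blob centers:
        v = p_t - p_{t-1}            (first derivative / velocity)
        a = v_t - v_{t-1}            (second derivative / acceleration)
        p_{t+1} ~= p_t + v + a       (constant-acceleration step)
    """
    p_prev2, p_prev1, p_curr = (np.asarray(p, dtype=float)
                                for p in (p_prev2, p_prev1, p_curr))
    v = p_curr - p_prev1
    a = v - (p_prev1 - p_prev2)
    return p_curr + v + a


def search_radius(base_radius, last_error, scale=1.5):
    """Adaptive search area: widen the window in proportion to the previous
    prediction error, so abrupt velocity changes do not lose the blob."""
    return base_radius + scale * last_error
```

A tracker would look for the hand blob inside a window of `search_radius(...)` pixels around `predict_position(...)`, then feed the measured miss back in as `last_error` for the next frame.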

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is giving a machine the ability to recognize a person's emotion and react to it properly. Efforts in that direction have mainly focused on facial and oral cues; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of the posture features that play a role in affective expression. To do so, affective postures from human subjects are first collected using a motion capture system, and the emotional features of posture are then described with spatial features. Through standard statistical techniques, we verified a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis is used to build affective-posture predictive models and to measure the saliency of the proposed set of posture features in discriminating among six basic emotional states. The proposed features and models are evaluated using the correlation between the actors' and observers' posture sets. Quantitative experimental results show that the proposed feature set discriminates well between emotions, and that the predictive models perform well.
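
Discriminant analysis of the kind used above can be reduced, under equal class priors, to classifying each posture feature vector by its Mahalanobis distance (using the pooled within-class covariance) to each class mean. The sketch below is a generic minimal version of that idea, not the paper's implementation; the ridge term and class structure are my assumptions:

```python
import numpy as np


class LinearDiscriminant:
    """Minimal linear discriminant classifier (equal priors): each class is
    modeled by its mean, and samples are assigned to the class with the
    smallest Mahalanobis distance under the pooled within-class covariance."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Center each class on its own mean, then pool the scatter.
        centered = np.concatenate(
            [X[y == c] - X[y == c].mean(axis=0) for c in self.classes_])
        # Small ridge keeps the pooled covariance invertible.
        cov = centered.T @ centered / len(X) + 1e-6 * np.eye(X.shape[1])
        self.prec_ = np.linalg.inv(cov)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d = X[:, None, :] - self.means_[None, :, :]
        # Squared Mahalanobis distance of every sample to every class mean.
        dist = np.einsum('nkd,de,nke->nk', d, self.prec_, d)
        return self.classes_[dist.argmin(axis=1)]
```

With six emotion labels as `y` and spatial posture features as rows of `X`, the fitted model also exposes the class means, whose separation along the discriminant directions is one way to read feature saliency. In practice `sklearn.discriminant_analysis.LinearDiscriminantAnalysis` covers the same ground.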

Development of a Serious Game using EEG Monitor and Kinect (뇌파측정기와 키넥트를 이용한 기능성 게임 개발)

  • Jung, Sang-Hyub; Han, Seung-Wan; Kim, Hyo-Chan; Kim, Ki-Nam; Song, Min-Sun; Lee, Kang-Hee
    • Journal of Korea Game Society / v.15 no.4 / pp.189-198 / 2015
  • This paper describes a serious game controlled by EEG and motion capture. We developed the game as a two-player competitive game, as follows. One player uses a control interface based on EEG signals, on the premise that the player's facial movements depict the player's emotion and intensity throughout game play. The other player uses a control interface based on Kinect's motion capture technology, which captures the player's vertical and lateral movements as well as running motion. The game shows the first player's EEG as a real-time graphic alongside the map on the game screen, so that player can pace himself based on this visualization of his brain activity, resulting in higher concentration and a better score. In addition, the second player can improve his physical abilities, since the game actions are based on his real movements.