• Title/Summary/Keyword: Facial motion


Speech Animation with Multilevel Control (다중 제어 레벨을 갖는 입모양 중심의 표정 생성)

  • Moon, Bo-Hee;Lee, Son-Ou;Wohn, Kwang-yun
    • Korean Journal of Cognitive Science / v.6 no.2 / pp.47-79 / 1995
  • Since the early days of computer graphics, facial animation has been applied to various fields, and it has recently found several novel applications such as virtual reality (for representing virtual agents), teleconferencing, and man-machine interfaces. When facial animation is applied to a system with multiple participants connected via a network, it is hard to animate facial expressions as desired in real time because of the amount of information that must be exchanged to maintain efficient communication. This paper's major contribution is to adapt 'Level-of-Detail' to facial animation in order to solve this problem. Level-of-Detail has been studied in computer graphics as a way to represent the appearance of complicated objects efficiently and adaptively, but until now no such attempt has been made in facial animation. In this paper, we present a systematic scheme that enables this kind of adaptive control using Level-of-Detail. The implemented system can generate speech-synchronized facial expressions from various types of user input such as text, voice, GUI, and head motion.
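As a rough illustration of the Level-of-Detail idea in this abstract, the sketch below prunes a set of facial control parameters according to the available network bandwidth; the parameter names and bandwidth thresholds are hypothetical, not values from the paper.

```python
# Minimal sketch of level-of-detail control for facial animation parameters.
# Parameter names and bandwidth thresholds are illustrative assumptions.

# Facial control parameters ordered roughly by visual importance (most important first).
FACIAL_PARAMS = [
    "jaw_open", "lip_corner_pull", "lip_pucker",   # speech-critical mouth shapes
    "brow_raise", "eye_blink",                     # secondary expression cues
    "cheek_puff", "nose_wrinkle",                  # fine detail
]

def select_lod(bandwidth_kbps: float) -> list[str]:
    """Pick which parameters to transmit/animate for the available bandwidth."""
    if bandwidth_kbps < 8:      # very constrained link: mouth only
        level = 3
    elif bandwidth_kbps < 32:   # moderate link: mouth + primary expression
        level = 5
    else:                       # ample bandwidth: full detail
        level = len(FACIAL_PARAMS)
    return FACIAL_PARAMS[:level]

if __name__ == "__main__":
    for bw in (4, 16, 64):
        print(bw, "kbps ->", select_lod(bw))
```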


Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use a distance matrix that stores the distances between pairs of feature points on the face. The shortest trajectories are found by dynamic programming. The space of facial expressions is multidimensional. To navigate this space, we visualize the space of expressions in 2D using multidimensional scaling (MDS). However, because there are too many facial expressions to choose from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. At the first level, the system creates about 10 clusters from the space of 2,400 facial expressions. Every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level. At the maximum level, the user completes the key-frame specification. We let animators use the system to create example animations and evaluate the system based on the results.
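A minimal sketch of the navigation pipeline described above, assuming mock frame data: pairwise dissimilarities are embedded in 2D with MDS and clustered at increasingly fine levels. Plain k-means is used here as a simple stand-in for the paper's fuzzy clustering, and all sizes are illustrative.

```python
# Hierarchical navigation sketch: MDS embedding plus a doubling number of clusters per level.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.random((240, 30))          # 240 mock frames, 30 pairwise feature distances each

# Pairwise dissimilarity between expression states (Euclidean over the distance features).
diss = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)

# 2D layout of the expression space for on-screen navigation.
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(diss)

# Hierarchy: ~10 clusters at the first level, doubling each level; centers act as candidate key frames.
for level, k in enumerate([10, 20, 40]):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(xy)
    print(f"level {level}: {k} candidate key frames, cluster sizes {np.bincount(labels)[:5]}...")
```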

Feature Based Techniques for a Driver's Distraction Detection using Supervised Learning Algorithms based on Fixed Monocular Video Camera

  • Ali, Syed Farooq;Hassan, Malik Tahir
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.8 / pp.3820-3841 / 2018
  • Most accidents occur due to drowsiness while driving, ignoring road signs, and driver distraction. Driver distraction depends on various factors, including talking with passengers while driving, mood disorders, nervousness, anger, over-excitement, anxiety, loud music, illness, fatigue, and head rotations due to changes in yaw, pitch, and roll angles. The contribution of this paper is two-fold. First, a data set is generated for conducting different experiments on driver distraction. Second, novel approaches are presented that use features based on facial points, especially features computed using motion vectors and interpolation, to detect a specific type of driver distraction: head rotation due to a change in yaw angle. These facial points are detected with the Active Shape Model (ASM) and Boosted Regression with Markov Networks (BoRMaN). Various classifiers are trained and tested on different frames to decide whether a driver is distracted. The approaches are also scale invariant. The results show that the approach using the novel ideas of motion vectors and interpolation outperforms the other approaches in detecting head rotation. We achieve an accuracy of 98.45% using a neural network.
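The motion-vector feature idea can be sketched as follows, assuming facial landmarks are already available from a detector such as ASM (the detector itself is not implemented here); the landmark count, the scale normalization, and the MLP classifier are illustrative choices, not the paper's exact configuration.

```python
# Sketch: per-landmark displacement between consecutive frames as classifier features.
import numpy as np
from sklearn.neural_network import MLPClassifier

def motion_vector_features(prev_pts: np.ndarray, curr_pts: np.ndarray) -> np.ndarray:
    """prev_pts, curr_pts: (n_landmarks, 2) facial points; returns a flat feature vector."""
    vectors = curr_pts - prev_pts                 # per-landmark displacement (dx, dy)
    # Normalize by the face bounding-box diagonal so features are roughly scale invariant.
    scale = np.linalg.norm(prev_pts.max(0) - prev_pts.min(0)) + 1e-8
    return (vectors / scale).ravel()

# Mock training data: 200 frame pairs, 20 landmarks each; labels 0 = attentive, 1 = head rotated.
rng = np.random.default_rng(1)
X = np.stack([motion_vector_features(rng.random((20, 2)), rng.random((20, 2))) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X, y)
print("training accuracy on mock data:", clf.score(X, y))
```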

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.4 / pp.332-339 / 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges, owing to focusing problems and motion blurring. Multiple frames under varying spatial or temporal settings can acquire additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process, after the scores are normalized. The score validation process removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, and majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion blurring point-spread functions are applied to the test images, to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
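The three-stage fusion (normalization, validation, combination) might look roughly like the sketch below; the normalization scheme and the validation threshold are assumptions for illustration, not the values used in the study.

```python
# Decision-level fusion sketch over per-frame class scores.
import numpy as np

def fuse_scores(scores: np.ndarray, rule: str = "average") -> int:
    """scores: (n_frames, n_classes) similarity scores; returns the fused class index."""
    # 1) Normalization: scale each frame's scores to [0, 1].
    mins, maxs = scores.min(1, keepdims=True), scores.max(1, keepdims=True)
    norm = (scores - mins) / (maxs - mins + 1e-8)
    # 2) Validation: keep frames whose best score is clearly separated from the runner-up.
    top2 = np.sort(norm, axis=1)[:, -2:]
    valid = norm[(top2[:, 1] - top2[:, 0]) > 0.2]
    if len(valid) == 0:
        valid = norm                              # fall back to all frames
    # 3) Combination: maximum, averaging, or majority voting over frames.
    if rule == "max":
        return int(valid.max(0).argmax())
    if rule == "vote":
        return int(np.bincount(valid.argmax(1)).argmax())
    return int(valid.mean(0).argmax())            # "average"

frame_scores = np.array([[0.2, 0.7, 0.1], [0.3, 0.6, 0.1], [0.5, 0.4, 0.1]])
print(fuse_scores(frame_scores, "vote"))          # -> 1
```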

Using dental virtual patients with dynamic occlusion in esthetic restoration of anterior teeth: case reports (동적 교합을 나타내는 가상 환자의 형성을 통한 심미적인 전치부 보철 수복 증례)

  • Phil-Joon Koo;Yu-Sung Choi;Jong-Hyuk Lee;Seung-Ryong Ha
    • The Journal of Korean Academy of Prosthodontics / v.61 no.4 / pp.328-343 / 2023
  • Recently, a method of fabricating an esthetic anterior fixed prosthesis has been introduced in which data such as three-dimensional facial scans and jaw motion are integrated to form a virtual patient with dynamic occlusion. This enables smooth communication with patients during the diagnostic process, improves the predictability of esthetic prosthetic treatment, and lowers the likelihood of occlusal adjustment. In this case report, a virtual patient with dynamic occlusion was created, the results of the treatment were simulated, and an esthetic maxillary anterior fixed prosthesis was fabricated. With the aid of the virtual patient, the final restorations were satisfactory in terms of both esthetics and function.

Short-term changes in muscle activity and jaw movement patterns after orthognathic surgery in skeletal Class III patients with facial asymmetry

  • Kim, Kyung-A;Park, Hong-Sik;Lee, Soo-Yeon;Kim, Su-Jung;Baek, Seung-Hak;Ahn, Hyo-Won
    • The Korean Journal of Orthodontics / v.49 no.4 / pp.254-264 / 2019
  • Objective: To evaluate the short-term changes in masticatory muscle activity and mandibular movement patterns after orthognathic surgery in skeletal Class III patients with facial asymmetry. Methods: Twenty-seven skeletal Class III adult patients were divided into two groups based on the degree of facial asymmetry: the experimental group (n = 17 [11 male and 6 female]; menton deviation ≥ 4 mm) and the control group (n = 10 [4 male and 6 female]; menton deviation < 1.6 mm). Cephalography, electromyography (EMG) for the anterior temporalis (TA) and masseter muscles (MM), and mandibular movement (range of motion [ROM] and average chewing pattern [ACP]) were evaluated before (T0) and 7 to 8 months (T1) after the surgery. Results: There were no significant postoperative changes in the EMG potentials of the TA and MM in either group, except in the anterior cotton roll biting test, in which the masticatory muscle activity changed into an MM-dominant pattern postoperatively in both groups. In the experimental group, the amount of maximum opening, protrusion, and lateral excursion to the non-deviated side was significantly decreased. The turning point tended to be shorter and significantly moved medially during chewing on the non-deviated side in the experimental group. Conclusions: In skeletal Class III patients with facial asymmetry, the EMG activity characteristics recovered to presurgical levels within 7 to 8 months after the surgery. Correction of the asymmetry caused limitation of jaw movement in terms of both ROM and ACP on the non-deviated side.

Material Characteristics of Dental Implant System with In-Vitro Mastication Loading

  • Jeong, Tae-Gon;Jeong, Yong-Hun;Lee, Su-Won;Yang, Jae-Ung;Jeong, Jae-Yeong;Park, Gwang-Min;Gang, Gwan-Su
    • Proceedings of the Korean Institute of Surface Engineering Conference / 2018.06a / pp.72-72 / 2018
  • The dynamic fatigue characteristics of a dental implant system were evaluated by applying single-axis compressive shear loading based on the ISO 14801 standard. For a more advanced dynamic fatigue test, multi-directional forces and motions need to be included to obtain more information about the mechanical properties under mastication in the oral environment. In this study, we prepared loading and motion protocols for multi-directional fatigue testing of a dental implant system with single-direction loading (Apical/Occlusal; AO) and additional mastication motions (Lingual/Facial; LF and Mesial/Distal; MD). Following the prepared protocol (a modification of ISO 14801), fatigue tests were conducted to identify the worst-case results for the development of a highly stabilized dental implant system. Mechanical testing was performed using a universal testing machine (MTS Bionix 858, MN, USA) for static compression and single-directional fatigue loading, while the multi-directional loading was performed with a joint simulator (ADL-Force 5, MA, USA) under load control. All mechanical tests were performed according to the ISO 14801:2016 standard. The static compression test was performed to identify the maximum fracture force at a loading speed of 1.0 mm/min. The dynamic fatigue test was performed at 40% of the maximum fracture force with a loading frequency of 5 Hz. The single-directional fatigue test applied only apical/occlusal (AO) force, while the multi-directional fatigue tests additionally applied 2° of facial/lingual (FL) or mesial/distal (MD) movement. Fatigue failure cycles were entirely different between single-directional and multi-directional loading: the failure cycle count under multi-directional loading was around 5 times lower than under single-directional loading. In addition, the displacement change with accumulated multi-directional fatigue cycles was higher than that with single-directional cycles.
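As a small worked example of the loading parameters mentioned above (fatigue load of 40% of the maximum fracture force, cycled at 5 Hz), the numbers below are hypothetical and only show how the load amplitude and test duration follow from those settings.

```python
# Illustrative arithmetic only: the fracture force and cycle counts are assumed values,
# not measurements from the study.
max_fracture_force_N = 800.0                  # hypothetical static fracture load
fatigue_load_N = 0.40 * max_fracture_force_N  # load amplitude used in the fatigue test
frequency_hz = 5.0

for cycles in (1_000_000, 200_000):           # e.g. single- vs. multi-directional failure cycles
    hours = cycles / frequency_hz / 3600
    print(f"{cycles:>9,d} cycles at {fatigue_load_N:.0f} N -> {hours:.1f} h of testing")
```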


LATERAL CEPHALOMETRIC ANALYSIS OF ASYMPTOMATIC VOLUNTEERS AND SYMPTOMATIC PATIENTS WITH TEMPOROMANDIBULAR INTERNAL DERANGEMENT (악관절 내장증 환자와 정상인의 두부방사선규격사진의 분석비교)

  • Shin, Sang-Hun;Park, Sung-Jin
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.25 no.4 / pp.330-336 / 1999
  • Study of the relationships between dentofacial structures and TMJ internal derangement is required to increase the predictability of TMJ internal derangement, but few studies have been reported. The purpose of this study is to reveal any correlation of dentofacial characteristics with TMJ internal derangement by lateral cephalometric analysis. Subjects were divided into two groups: (1) symptomatic patients with TMJ internal derangement and (2) asymptomatic volunteers with no TMJ internal derangement. Twenty symptomatic patients with TMJ internal derangement (7 male, 13 female) were selected from our clinic and underwent a standardized clinical examination, panoramic radiography, transcranial views, and TMJ tomography. Twenty asymptomatic volunteers (9 male, 11 female) with no pain and no limitation of motion were selected from our clinic. All subjects underwent lateral cephalometric analysis. The results were as follows. 1. No significant difference in the cranial base was detected between the ID and normal groups. 2. The maxilla of the ID group was positioned more posteriorly than in the normal group. 3. The mandible of the ID group was positioned more posteriorly than in the normal group, and the facial profile was hyperdivergent. 4. The posterior facial height of the ID group was less than that of the normal group, and thus the facial profile was hyperdivergent. Patients with these characteristics have a high prevalence of ID; therefore, care should be taken in the diagnosis and treatment of TMJ ID.


Error Concealment Based on Semantic Prioritization with Hardware-Based Face Tracking

  • Lee, Jae-Beom;Park, Ju-Hyun;Lee, Hyuk-Jae;Lee, Woo-Chan
    • ETRI Journal / v.26 no.6 / pp.535-544 / 2004
  • With video compression standards such as MPEG-4, a transmission error occurs on a video-packet basis rather than on a macroblock basis. In this context, we propose a semantic error prioritization method that determines the size of a video packet based on the importance of its contents. The video packet length is made short for important areas, such as the facial area, in order to reduce the possibility of error accumulation. To facilitate the semantic error prioritization, an efficient hardware algorithm for face tracking is proposed. The increase in hardware complexity is minimal because a motion estimation engine is efficiently reused for face tracking. Experimental results demonstrate that the facial area is well protected with the proposed scheme.
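A rough sketch of the semantic prioritization idea, assuming the face tracker supplies a bounding rectangle in macroblock coordinates; the packet-size limits and data layout below are hypothetical, not MPEG-4 specifics from the paper.

```python
# Sketch: macroblocks overlapping the tracked face region go into shorter video packets,
# so a packet loss corrupts less of the important area.

FACE_PACKET_BITS = 256      # short packets inside the facial region (assumed limit)
OTHER_PACKET_BITS = 1024    # longer packets elsewhere (assumed limit)

def in_face(mb_x: int, mb_y: int, face_rect: tuple[int, int, int, int]) -> bool:
    """face_rect = (x0, y0, x1, y1) in macroblock coordinates from the face tracker."""
    x0, y0, x1, y1 = face_rect
    return x0 <= mb_x <= x1 and y0 <= mb_y <= y1

def packetize(mb_bits: list[tuple[int, int, int]], face_rect) -> list[list[tuple[int, int]]]:
    """mb_bits: (x, y, coded_bits) per macroblock in scan order; returns packets of (x, y)."""
    packets, current, budget = [], [], 0
    for x, y, bits in mb_bits:
        limit = FACE_PACKET_BITS if in_face(x, y, face_rect) else OTHER_PACKET_BITS
        if current and budget + bits > limit:   # close the packet before it grows too large
            packets.append(current)
            current, budget = [], 0
        current.append((x, y))
        budget += bits
    if current:
        packets.append(current)
    return packets
```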


The Multi-marker Tracking for Facial Animation (Facial Animation을 위한 다중 마커의 추적)

  • 이문희;김철기;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.553-557 / 2001
  • Animating facial expressions is regarded as one of the most difficult areas of computer animation because of the complexity of the facial structure and the subtle movements of the facial surface. Recently, in 3D animation, film special effects, and game production, motion capture systems have been used to measure actual human motion and facial expressions numerically and feed them directly into animation, dramatically reducing work time, labor, and cost. However, existing motion capture systems are expensive because they rely on high-speed cameras, and they also suffer from various problems in motion tracking. In this paper, we propose an economical and efficient facial motion tracking technique that can be applied to a motion capture system for facial animation, using ordinary low-cost cameras together with neural networks and image processing techniques.
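A minimal baseline sketch of multi-marker tracking from grayscale frames, assuming bright markers that can be found by simple thresholding; the neural-network refinement described in the paper is not reproduced here, and the threshold is an arbitrary illustrative value.

```python
# Sketch: detect marker centroids by intensity thresholding, then match them to the
# previous frame's markers by nearest neighbor.
import numpy as np
from scipy import ndimage

def detect_markers(frame: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """frame: grayscale image with values in [0, 1]; returns (n, 2) marker centroids (row, col)."""
    labeled, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labeled, list(range(1, n + 1))))

def match_markers(prev: np.ndarray, curr: np.ndarray) -> list[int]:
    """For each previous marker, return the index of the nearest current marker."""
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    return d.argmin(axis=1).tolist()
```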
