• Title/Summary/Keyword: Facial Model

Monophthong Recognition Optimizing Muscle Mixing Based on Facial Surface EMG Signals (안면근육 표면근전도 신호기반 근육 조합 최적화를 통한 단모음인식)

  • Lee, Byeong-Hyeon;Ryu, Jae-Hwan;Lee, Mi-Ran;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.3 / pp.143-150 / 2016
  • In this paper, we propose a Korean monophthong recognition method that optimizes the muscle combination based on facial surface EMG signals. We observed that EMG signal patterns and muscle activity vary with the pronounced Korean monophthong. As feature extraction algorithms we use RMS, VAR, MMAV1, and MMAV2, which showed high recognition accuracy in a previous study, together with cepstral coefficients, and we classify Korean monophthongs with QDA (Quadratic Discriminant Analysis) and HMM (Hidden Markov Model). The muscle combination is optimized from the input data in the training phase, and the optimized result is applied in the recognition phase, where new data are input and the Korean monophthong is finally recognized. Experimental results show average recognition accuracies of 85.7% with QDA and 75.1% with HMM.
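The time-domain EMG features named above have standard definitions and are easy to sketch. The window below is synthetic, and the MMAV1 weighting follows the common EMG-feature literature rather than any code from the paper.

```python
import math

def rms(x):
    # Root mean square of a windowed EMG signal
    return math.sqrt(sum(v * v for v in x) / len(x))

def var(x):
    # Variance (about the mean) of the window
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def mmav1(x):
    # Modified mean absolute value 1: middle half of the window
    # weighted 1.0, edge samples weighted 0.5
    n = len(x)
    w = [1.0 if 0.25 * n <= i < 0.75 * n else 0.5 for i in range(n)]
    return sum(wi * abs(v) for wi, v in zip(w, x)) / n

# One synthetic analysis window; a real pipeline would slide this
# over each facial-muscle channel before classification.
signal = [0.1, -0.4, 0.3, 0.8, -0.2, 0.5, -0.6, 0.2]
features = [rms(signal), var(signal), mmav1(signal)]
```

A per-channel feature vector of this form would then be fed to the QDA or HMM classifier.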

Face Super-Resolution using Adversarial Distillation of Multi-Scale Facial Region Dictionary (다중 스케일 얼굴 영역 딕셔너리의 적대적 증류를 이용한 얼굴 초해상화)

  • Jo, Byungho;Park, In Kyu;Hong, Sungeun
    • Journal of Broadcast Engineering / v.26 no.5 / pp.608-620 / 2021
  • Recent deep learning-based face super-resolution (FSR) works have shown significant performance by utilizing facial prior knowledge, such as facial landmarks and dictionaries that reflect structural or semantic characteristics of the human face. However, most of these methods require additional processing time and memory. To solve this issue, this paper proposes an efficient FSR model using knowledge distillation. The intermediate features of the teacher network, which contain dictionary information on the major face regions, are transferred to the student through adversarial multi-scale feature distillation. Experimental results show that the proposed model is superior to other SR methods and demonstrate its effectiveness compared to the teacher model.
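The feature-matching core of such distillation can be sketched with NumPy. This minimal stand-in keeps only a plain multi-scale L2 matching term, whereas the paper's method couples the transfer with an adversarial discriminator; all names and shapes here are illustrative.

```python
import numpy as np

def distillation_loss(teacher_feats, student_feats):
    # Sum of per-scale mean squared differences between teacher and
    # student feature maps; the adversarial term of the paper is omitted.
    return sum(float(np.mean((t - s) ** 2))
               for t, s in zip(teacher_feats, student_feats))

# Two invented "scales" of feature maps standing in for network outputs.
rng = np.random.default_rng(0)
teacher = [rng.standard_normal((8, 8)), rng.standard_normal((16, 16))]
student = [f + 0.1 for f in teacher]   # student slightly off the teacher
loss = distillation_loss(teacher, student)
```

Minimizing this term pushes the lightweight student's intermediate features toward the dictionary-informed teacher features.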

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.23 no.2 / pp.71-77 / 2022
  • In this paper, a database is collected for extending speech synthesis models to synthesize speech according to emotion and to generate facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences cover four emotions: happiness, sadness, anger, and neutrality, and each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap and contain expressions matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed on emotional category, intensity, and genuineness. To measure accuracy according to the modality of the data, the database is divided into audio-video data, audio data, and video data.

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.1-10 / 2021
  • The ability to recognize human emotions by computer vision is a very important task with many potential applications. Accordingly, the demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution demands a more sophisticated recognition method. In this paper, we propose a divide-and-conquer CNN training strategy to improve the performance of facial thermal image-based emotion recognition. The proposed method first trains a model that, guided by confusion matrix analysis, classifies hard-to-separate similar emotion classes into the same class group, and then divides the problem so that each class group is recognized again as the actual emotions. In experiments, the proposed method achieved higher accuracy in all tests than recognizing all the presented emotions with a single CNN model.
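The grouping step of the strategy, merging emotion classes that the confusion matrix shows are mutually confusable, can be sketched as follows; the labels, matrix values, and threshold are illustrative, not taken from the paper.

```python
def confusable_groups(conf_matrix, labels, threshold):
    # Merge classes whose off-diagonal confusion exceeds the threshold
    # into the same group (union-find over the confusion graph).
    parent = list(range(len(labels)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    n = len(labels)
    for i in range(n):
        for j in range(n):
            if i != j and conf_matrix[i][j] >= threshold:
                union(i, j)
    groups = {}
    for i, lab in enumerate(labels):
        groups.setdefault(find(i), []).append(lab)
    return sorted(groups.values())

# Row-normalized confusion matrix: 'sad' and 'fear' confuse each other,
# so a second-stage classifier would separate them within their group.
cm = [
    [0.9, 0.05, 0.05],   # happy
    [0.05, 0.6, 0.35],   # sad
    [0.05, 0.3, 0.65],   # fear
]
g = confusable_groups(cm, ["happy", "sad", "fear"], threshold=0.2)
```

A first CNN would then predict the group, and a per-group CNN would resolve the actual emotion.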

3D face recognition based on radial basis function network (방사 기저 함수 신경망을 이용한 3차원 얼굴인식)

  • Yang, Uk-Il;Sohn, Kwang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.2 s.314 / pp.82-92 / 2007
  • This paper describes a novel global shape (GS) feature based on a radial basis function network (RBFN) and the extraction method of the proposed feature for 3D face recognition. An RBFN is a weighted sum of RBFs and represents the non-linearity of a facial shape well through this linear combination. The proposed facial feature consists of the RBFN weights learned from the horizontal profiles of a face. Although it is a GS feature, the RBFN-based feature expresses the locality of the facial shape, reduces the feature complexity of existing global methods, and also has a smoothing effect on the facial shape. In experiments matching the features with a hidden Markov model (HMM), we achieve 94.7% for a gallery set of 100 faces against a test set of 300 faces.
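The idea of using RBFN weights fitted to a horizontal depth profile as a feature can be illustrated with a least-squares sketch; the profile shape, centers, and Gaussian width below are invented for illustration, not taken from the paper.

```python
import numpy as np

def fit_rbfn(x, y, centers, sigma):
    # Design matrix of Gaussian RBFs; solve for the weights by
    # linear least squares. The weight vector w is the feature.
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rbfn_predict(x, centers, sigma, w):
    # Weighted sum of RBFs reconstructs (and smooths) the profile.
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    return phi @ w

# Synthetic horizontal depth profile of a face slice (nose-like ridge).
x = np.linspace(-1, 1, 50)
y = np.exp(-4 * x ** 2)
centers = np.linspace(-1, 1, 10)
w = fit_rbfn(x, y, centers, sigma=0.3)
recon = rbfn_predict(x, centers, sigma=0.3, w=w)
```

Each horizontal profile yields one compact weight vector, and sequences of such vectors can be matched with an HMM as in the paper.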

A Study on Face Images according to Facial Shape Differences and Make-up (얼굴의 형태적 특성과 메이크업에 의한 얼굴 이미지 연구)

  • Song, Mi-Young;Park, Oak-Reon;Lee, Young-Ju
    • Korean Journal of Human Ecology / v.14 no.1 / pp.143-153 / 2005
  • The purpose of this research is to study face images according to differences in facial shape and make-up. A variety of face images can be formulated by computer graphic simulation, combining numerous facial shapes and make-up styles. To examine the diverse images produced by make-up styles, we applied five forms of eyebrows, two types of eye shadow, and three lip shapes to the round-shaped face of a model. The questionnaire used as the experimental stimulus contained 28 items, each a pair of bipolar adjectives rated on a 7-point scale. Data were analyzed using the Varimax orthogonal rotation method, Duncan's multiple range test, and three-way ANOVA. Comparing the results of applying make-up to various face types, we found that facial shape, eyebrows, eye shadow, and lip shape interact to influence the total facial image. Factor analysis of make-up image perception yielded four factors: mildness, modernness, elegance, and sociableness. In terms of those factors, a round-form make-up style showed the highest mildness, and upward and straight styles the highest modernness. Elegance was highest when the eye shadow was round and the lip style straight. Lastly, an incurved lip make-up style showed the highest sociableness.

Face Deformation Technique for Efficient Virtual Aesthetic Surgery Models (효과적인 얼굴 가상성형 모델을 위한 얼굴 변형 기법)

  • Park Hyun;Moon Young Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.63-72 / 2005
  • In this paper, we propose a deformation technique based on the Radial Basis Function (RBF) and a blending technique that combines the deformed facial component with the original face for a Virtual Aesthetic Surgery (VAS) system. The deformation technique needs smoothness and accuracy to deform the fluid facial components, and also locality so as not to affect or distort facial components outside the deformation region. To satisfy these characteristics, the VAS system computes the degree of deformation of lattice cells using RBFs on a Free-Form Deformation (FFD) model. The deformation error is compensated by the coefficients of the mapping function, solved recursively by Singular Value Decomposition (SVD) using the SSE (sum of squared errors) between the deformed control points and the target control points on base curves. The deformed facial component is blended with the original face using a blending ratio computed by the Euclidean distance transform. Experimental results show that the proposed deformation and blending techniques are very efficient in terms of accuracy and distortion.
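The control-point solve described above can be sketched with an SVD-based least-squares step; this one-shot sketch uses NumPy's pseudoinverse (computed via SVD) rather than the paper's recursive scheme, and the control points, centers, and Gaussian kernel are illustrative assumptions.

```python
import numpy as np

def solve_mapping(src_pts, dst_pts, centers, sigma=0.5):
    # Coefficients of an RBF mapping f(p) = Phi(p) @ C that sends the
    # source control points onto the target control points, minimizing
    # the SSE between mapped and target points.
    def design(p):
        d2 = ((p[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    phi = design(src_pts)
    coeff = np.linalg.pinv(phi) @ dst_pts   # pinv is computed via SVD
    return coeff, design

# Four lattice control points pushed slightly around a unit square.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
coeff, design = solve_mapping(src, dst, centers=src)
mapped = design(src) @ coeff
```

With one RBF per control point the system is square and the mapping reproduces the targets exactly; with more points than centers it becomes a least-squares fit.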

An Experimental Study about flap Viability after Harvesting of the Composite Face/Scalp flap for Allotransplantation in Rabbit Model (가토의 안면-두피 피판 동종이식을 위한 실험용 모델 연구)

  • Seo, Yeong-Min;Chung, Seung-Moon
    • Archives of Reconstructive Microsurgery / v.14 no.2 / pp.95-104 / 2005
  • The aim of this study was to investigate the major vascular system supplying the flap, the flap survival rate, and complications after flap elevation, in order to evaluate the feasibility of vascularized face/scalp allotransplantation. Forty New Zealand white rabbits were divided into a control group and an experimental group. In the control group, a face/scalp composite unit composed of skin, subcutaneous tissue, and platysma muscle, supplied by the bilateral facial, temporal, and auricular arteries and drained by the external jugular vein, was elevated, after which the bilateral facial, temporal, and auricular arteries were ligated. The experimental group received the same composite unit with these arteries left unligated. We measured the survival area of the flaps by the grid method in the sixteen animals that survived for four weeks in the control group and the fourteen in the experimental group. The mean survival duration of the flap was 3.7 days in the control group and 20.0 days in the experimental group; the differences in mean survival duration and in survival rate at 28 days between the groups were significant (p<0.05). The mean fractions of surviving flap area were 1.3±4.% in the control group and 63.1±4.8% in the experimental group, the latter being significantly higher (p<0.05). The composite face/scalp flap we elevated, supplied by the bilateral facial, temporal, and auricular arteries and drained by the external jugular vein, has sufficient viability to be transplanted after elevation.

The improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children (영유아 이상징후 감지를 위한 표정 인식 알고리즘 개선)

  • Kim, Yun-Su;Lee, Su-In;Seok, Jong-Won
    • Journal of IKEEE / v.25 no.3 / pp.430-436 / 2021
  • Non-contact body temperature measurement systems using optical and thermal imaging cameras are a key means of managing febrile diseases in mass facilities. Conventional systems can only perform simple body temperature measurement over the face area, because they rely solely on a deep learning-based face detection algorithm, so they are limited in detecting abnormal symptoms in infants and young children, who have difficulty expressing their condition. This paper proposes an improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children. The proposed method uses an object detection model to detect infants and young children in an image, then acquires the coordinates of the eyes, nose, and mouth, the key elements of facial expression recognition. Finally, facial expression recognition is performed by applying a selective sharpening filter based on the obtained coordinates. Experimental results show that the proposed algorithm improved accuracy by 2.52%, 1.12%, and 2.29%, respectively, for the neutral, happy, and sad expressions in the UTK dataset.
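A landmark-guided selective sharpening step of this kind can be sketched in plain Python; the 3x3 unsharp kernel and the circular region test are assumptions for illustration, not the paper's exact filter.

```python
def sharpen_region(img, cy, cx, radius):
    # Apply a 3x3 sharpening kernel only within `radius` of the
    # landmark (cy, cx), leaving the rest of the image untouched.
    h, w = len(img), len(img[0])
    kernel = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
    out = [row[:] for row in img]          # copy; input is preserved
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y - cy) ** 2 + (x - cx) ** 2 > radius ** 2:
                continue
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# Flat 5x5 patch with one brighter pixel at an assumed eye landmark.
flat = [[2] * 5 for _ in range(5)]
flat[2][2] = 3
sharpened = sharpen_region(flat, 2, 2, 1)
```

Sharpening only around the detected eyes, nose, and mouth emphasizes the regions that drive expression recognition without amplifying noise elsewhere.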

3D Model for Korean-Japanese Sign Language Image Communication (한-일 수화 영상통신을 위한 3차원 모델)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1998.06a / pp.929-932 / 1998
  • In this paper we propose a method of representing emotional expressions and lip shapes for sign language communication using a 3-dimensional model. We first employ the action units (AU) of the Facial Action Coding System (FACS) to display facial expressions. We then define 11 basic lip shapes and the sounding time of each component in a syllable in order to synthesize lip shapes more precisely for Korean characters. Experimental results show that the proposed method can be used efficiently for sign language image communication between different languages.
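The pairing of lip shapes with per-component sounding times can be sketched as a lookup plus zip; the `LIP_SHAPE` table and the timings below are hypothetical stand-ins, since the paper's 11-shape classification is not reproduced here.

```python
# Hypothetical jamo-to-lip-shape lookup; illustrative only, not the
# authors' actual 11-shape classification.
LIP_SHAPE = {"ㅏ": "open-wide", "ㅗ": "round", "ㅣ": "spread",
             "ㅁ": "closed", "ㄱ": "neutral", "ㄴ": "neutral"}

def syllable_to_keyframes(components, times):
    # Pair each jamo component with its lip shape and sounding time,
    # producing the keyframe list a 3D face model would interpolate.
    return [(c, LIP_SHAPE.get(c, "neutral"), t)
            for c, t in zip(components, times)]

# "마" = ㅁ + ㅏ, with the vowel held longer than the consonant.
frames = syllable_to_keyframes(["ㅁ", "ㅏ"], [0.08, 0.22])
```

Interpolating the 3D model's mouth between successive keyframes yields the per-syllable lip animation the paper describes.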
