• Title/Abstract/Keywords: facial recognition technology


Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, Vol. 21, No. 2E, pp. 98-104, 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained them to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voices, we used prosodic parameters such as pitch signals, energy, and their derivatives, which were then trained by HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, and these feature parameters were then trained by NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.
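
As a rough illustration of how per-modality scores could be combined in such a bimodal setup, the sketch below fuses class scores from a voice HMM and a facial-expression NN. The abstract does not give the paper's actual combination rule, so the weighted sum, the `voice_weight` parameter, and the four-emotion label set are assumptions made purely for illustration (NumPy only).

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse_scores(hmm_loglik, nn_probs, voice_weight=0.5):
    """Combine per-class scores from a voice HMM and a facial-expression NN.

    hmm_loglik : HMM log-likelihoods, one per emotion (voice modality).
    nn_probs   : NN output probabilities, one per emotion (face modality).
    voice_weight is a hypothetical fusion weight; the paper does not state
    its combination rule, so a weighted sum of normalized scores is used
    here purely for illustration.
    """
    # Turn HMM log-likelihoods into a probability-like distribution.
    voice_probs = np.exp(hmm_loglik - np.max(hmm_loglik))
    voice_probs /= voice_probs.sum()

    face_probs = np.asarray(nn_probs, dtype=float)
    face_probs /= face_probs.sum()

    combined = voice_weight * voice_probs + (1.0 - voice_weight) * face_probs
    return EMOTIONS[int(np.argmax(combined))], combined

# Example with made-up scores: voice slightly favors "anger", face favors "surprise".
label, scores = fuse_scores(np.array([-10.0, -14.0, -13.0, -11.0]),
                            np.array([0.10, 0.15, 0.20, 0.55]))
print(label, scores)
```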

얼굴 특징 변화에 따른 휴먼 감성 인식 (Human Emotion Recognition based on Variance of Facial Features)

  • 이용환;김영섭
    • 반도체디스플레이기술학회지, Vol. 16, No. 4, pp. 79-85, 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance between the characteristic curves. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.
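
The classification step above relies on the Hausdorff distance between characteristic curves. The sketch below is a minimal NumPy-only illustration rather than the paper's implementation: it samples quadratic Bezier curves and classifies a mouth curve by its Hausdorff distance to hypothetical per-emotion template curves; the control points and templates are invented for the example.

```python
import numpy as np

def bezier_points(p0, p1, p2, n=50):
    """Sample a quadratic Bezier curve from three 2-D control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classify(curve, templates):
    """Pick the emotion whose template curve is nearest in Hausdorff distance."""
    return min(templates, key=lambda k: hausdorff(curve, templates[k]))

# Toy example: a mouth curve compared against two hypothetical templates.
mouth = bezier_points(np.array([0.0, 0.0]), np.array([0.5, -0.3]), np.array([1.0, 0.0]))
templates = {
    "happiness": bezier_points(np.array([0.0, 0.0]), np.array([0.5, -0.4]), np.array([1.0, 0.0])),
    "sadness":   bezier_points(np.array([0.0, 0.0]), np.array([0.5,  0.4]), np.array([1.0, 0.0])),
}
print(classify(mouth, templates))  # -> "happiness"
```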


Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology, Vol. 12, No. 4, pp. 1657-1662, 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to various practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions using a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection to remove background and noise effects from the raw image sequences, alignment of each image using vertex mask generation, and extraction of 1D transform features, which are then reduced by principal component analysis. Finally, these features are trained and tested using a Hidden Markov Model (HMM). The approach was evaluated on two public facial expression video datasets, Cohn-Kanade and AT&T, achieving recognition rates of 96.75% and 96.92%, respectively. These results show the superiority of the proposed approach over state-of-the-art methods.
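
A minimal sketch of the PCA-plus-HMM stage described above is given below. It assumes per-frame feature sequences are already extracted (the vertex-mask and 1D-transform steps are not reproduced) and uses scikit-learn's PCA together with the third-party hmmlearn package; the hyper-parameters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

def train_class_hmms(sequences_by_class, n_components=4, n_pca=20):
    """Fit a shared PCA plus one Gaussian HMM per expression class.

    sequences_by_class: dict mapping class name -> list of (T_i, D) per-frame
    feature sequences. Feature choice and hyper-parameters are illustrative.
    """
    all_frames = np.vstack([s for seqs in sequences_by_class.values() for s in seqs])
    pca = PCA(n_components=n_pca).fit(all_frames)

    models = {}
    for label, seqs in sequences_by_class.items():
        reduced = [pca.transform(s) for s in seqs]
        lengths = [len(r) for r in reduced]
        m = hmm.GaussianHMM(n_components=n_components, covariance_type="diag", n_iter=50)
        m.fit(np.vstack(reduced), lengths)   # concatenated sequences + their lengths
        models[label] = m
    return pca, models

def classify(sequence, pca, models):
    """Assign the label whose HMM gives the highest log-likelihood."""
    x = pca.transform(sequence)
    return max(models, key=lambda k: models[k].score(x))
```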

효과적인 얼굴 인식을 위한 특징 분포 및 적응적 인식기 (Feature Variance and Adaptive classifier for Efficient Face Recognition)

  • ;남미영;이필규
    • 한국정보처리학회:학술대회논문집, 한국정보처리학회 2007년도 추계학술발표대회, pp. 34-37, 2007
  • Face recognition is still a challenging problem in the pattern recognition field, affected by factors such as facial expression, illumination, and pose. Facial features such as the eyes, nose, and mouth constitute a complete face; the mouth region in particular suffers from the undesirable effects of facial expression, which contribute to low performance. We propose a new approach for face recognition under facial expression variation that applies two cascaded classifiers to improve the recognition rate. All facial expression images are first processed by a general-purpose classifier; images rejected by a threshold are then used for adaptation with a genetic algorithm (GA) to improve the recognition rate. We apply Gabor wavelets as the general classifier and Gabor wavelets with a genetic algorithm for adaptation under expression variation. We have designed, implemented, and demonstrated the proposed approach on this problem. The FERET face image dataset was chosen for training and testing, and good results were achieved.
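
The first-stage, general-purpose Gabor wavelet classifier could be approximated as in the sketch below, which builds a small Gabor filter bank with OpenCV and pools simple response statistics into a feature vector. The kernel parameters are illustrative, and the GA-based adaptation stage is not reproduced.

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5,
               n_orientations=8, n_scales=1):
    """Build a small bank of Gabor kernels (parameters are illustrative)."""
    kernels = []
    for s in range(n_scales):
        for o in range(n_orientations):
            theta = np.pi * o / n_orientations
            k = cv2.getGaborKernel((ksize, ksize), sigma * (s + 1), theta,
                                   lambd * (s + 1), gamma, psi=0,
                                   ktype=cv2.CV_32F)
            kernels.append(k)
    return kernels

def gabor_features(gray_face, kernels):
    """Filter the face image with each kernel and pool simple statistics."""
    feats = []
    for k in kernels:
        resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, k)
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Usage: feature vectors for aligned faces can then be compared, e.g. by
# Euclidean or cosine distance, as the first-stage general classifier.
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# vec = gabor_features(face, gabor_bank())
```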

딥 러닝 기술 이용한 얼굴 표정 인식에 따른 이모티콘 추출 연구 (A Study on the Emoticon Extraction based on Facial Expression Recognition using Deep Learning Technique)

  • 정봉재;장범
    • 한국인공지능학회지, Vol. 5, No. 2, pp. 43-53, 2017
  • In this paper, a scheme is proposed that uses an Android device to identify a user's facial expression and extract a matching emoticon. Understanding and expressing emotion are very important to human-computer interaction, and technology for identifying human expressions is widely popular. Instead of searching for frequently used emoticons, users can have their facial expression identified with the camera, a technique that is readily usable today. A publicly available third-party facial expression dataset is used to train a neural network model for facial expression recognition, and matching the user's facial expression to similar expressions reaches an accuracy of 66%. No emoticon search is needed: when the camera recognizes an expression, the corresponding emoticon appears immediately, which is very convenient when sending messages to others. With countless emoticons available and interest in deep learning growing, a more suitable algorithm for expression recognition should be used to further improve accuracy.
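
The paper does not describe its network architecture, so the sketch below shows only a generic Keras convolutional model mapping a 48x48 grayscale face to one of seven assumed expression classes and then to an emoticon; the input size, class set, and `EMOTICONS` mapping are all assumptions made for illustration.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical class-to-emoticon mapping, not taken from the paper.
EMOTICONS = {0: "😠", 1: "🤢", 2: "😨", 3: "😀", 4: "😢", 5: "😲", 6: "😐"}

def build_model(n_classes=7):
    """A generic small CNN for expression classification (illustrative only)."""
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # third-party dataset

def emoticon_for(face_48x48_gray):
    """Return the emoticon matching the predicted expression class."""
    x = face_48x48_gray.reshape(1, 48, 48, 1).astype("float32") / 255.0
    cls = int(np.argmax(model.predict(x, verbose=0)))
    return EMOTICONS[cls]
```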

Facial Feature Recognition based on ASNMF Method

  • Zhou, Jing;Wang, Tianjiang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 12, pp. 6028-6042, 2019
  • The Sparse Nonnegative Matrix Factorization (SNMF) method can control the sparsity of the decomposed matrices, so it can be adopted to control the sparsity of facial features for extraction and recognition. To improve the accuracy of the SNMF method for facial feature recognition, new additive iterative rules based on improved iterative step sizes are proposed, transforming the traditional multiplicative iterative rules of SNMF into additive ones. Meanwhile, to further increase the sparsity of the basis matrix decomposed by the improved SNMF method, a threshold-sparse constraint is adopted to convert the basis matrix into a zero-one matrix, which further improves the accuracy of facial feature recognition. The improved SNMF method based on the additive iterative rules and the threshold-sparse constraint, abbreviated as ASNMF, achieved recognition rates of 96% and 100% on the ORL and CK+ facial datasets, respectively. Comparative experiments show that the recognition rate of the ASNMF method is clearly higher than that of basic NMF, traditional SNMF, convex nonnegative matrix factorization (CNMF), and Deep NMF.
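
The abstract does not state the ASNMF update rules themselves, so the sketch below shows only a baseline: standard multiplicative-update sparse NMF with an L1 penalty on the coefficient matrix, followed by a simple thresholding of the basis matrix to suggest the zero-one constraint. It is not the paper's additive-rule method.

```python
import numpy as np

def sparse_nmf(V, rank, n_iter=200, lam=0.1, eps=1e-9):
    """Baseline sparse NMF: minimize ||V - W H||_F^2 + lam * ||H||_1
    with standard multiplicative updates (not the paper's additive rules)."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def threshold_basis(W, tau=0.5):
    """Illustration of a threshold-sparse step: binarize the basis matrix."""
    return (W >= tau * W.max(axis=0, keepdims=True)).astype(float)

# Toy usage: factor a random nonnegative "image" matrix.
V = np.abs(np.random.default_rng(1).random((64, 40)))
W, H = sparse_nmf(V, rank=8)
W_bin = threshold_basis(W)
print(np.linalg.norm(V - W @ H), W_bin.sum())
```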

Facial Gender Recognition via Low-rank and Collaborative Representation in An Unconstrained Environment

  • Sun, Ning;Guo, Hang;Liu, Jixin;Han, Guang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 11, No. 9, pp. 4510-4526, 2017
  • Most available methods for facial gender recognition work well in constrained situations, but their performance decreases significantly in unconstrained environments. In this paper, a method based on low-rank and collaborative representation is proposed for facial gender recognition in the wild. First, low-rank decomposition is applied to the face image to minimize the negative effects of corruption and varying illumination in an unconstrained environment. Then, collaborative representation is employed as the classifier, using the much weaker $l_2$-norm constraint to achieve similar classification results with significantly lower complexity. The proposed method combines low-rank decomposition and collaborative representation into a unified framework for facial gender recognition in unconstrained environments. Extensive experiments on three benchmarks, including AR, CAS-PERL, and YouTube, show the effectiveness of the proposed method. Compared with several state-of-the-art algorithms, the method shows clear superiority in accuracy and robustness.
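
The collaborative representation classifier with an $l_2$ constraint has a closed-form solution, sketched below in NumPy: the query is coded over all training samples via $\alpha = (X^\top X + \lambda I)^{-1} X^\top y$ and assigned to the class with the smallest regularized class-wise residual. The low-rank preprocessing step is not reproduced, and $\lambda$ is an illustrative value.

```python
import numpy as np

def crc_fit(X, labels, lam=0.01):
    """Precompute the l2-regularized projection for collaborative representation.

    X      : (d, n) dictionary of training face features, one column per sample.
    labels : length-n array of class labels (e.g. 0 = female, 1 = male).
    """
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)  # (n, d)
    return P, X, np.asarray(labels)

def crc_classify(y, P, X, labels):
    """Code y over all training samples, then pick the class with the
    smallest regularized class-wise reconstruction residual."""
    alpha = P @ y
    best, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        resid = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best, best_score = c, score
    return best
```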

얼굴 인식을 통한 동적 감정 분류 (Dynamic Emotion Classification through Facial Recognition)

  • 한우리;이용환;박제호;김영섭
    • 반도체디스플레이기술학회지, Vol. 12, No. 3, pp. 53-57, 2013
  • Human emotions are expressed in various ways: through language, facial expression, and gestures. In particular, the facial expression carries much information about human emotion. These vague human emotions appear not as a single emotion but as a combination of various emotions. This paper proposes an emotion classification algorithm using the Active Appearance Model (AAM) and fuzzy k-Nearest Neighbor (k-NN), which represents facial expression in a way that matches such vague human emotion. Applying the Mahalanobis distance to the class centers, the algorithm determines the degree of membership between the center class and each emotion class, and this membership level expresses the intensity of the emotion. Our emotion recognition system can recognize complex emotions using the fuzzy k-NN classifier.
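
A minimal sketch of the fuzzy k-NN step is given below, using the standard membership formula weighted by inverse Mahalanobis distances so that one face can receive graded memberships in several emotion classes. The shared covariance estimate, k, and the fuzzifier m are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def mahalanobis(x, y, inv_cov):
    """Mahalanobis distance between two feature vectors."""
    d = x - y
    return float(np.sqrt(d @ inv_cov @ d))

def fuzzy_knn(x, train_X, train_y, n_classes, k=5, m=2.0):
    """Standard fuzzy k-NN: return a membership degree for every emotion class,
    so a face can belong to several emotions with different intensities."""
    inv_cov = np.linalg.pinv(np.cov(train_X, rowvar=False))
    dists = np.array([mahalanobis(x, t, inv_cov) for t in train_X])
    nn = np.argsort(dists)[:k]

    weights = 1.0 / (dists[nn] ** (2.0 / (m - 1.0)) + 1e-12)
    memberships = np.zeros(n_classes)
    for w, idx in zip(weights, nn):
        memberships[train_y[idx]] += w
    return memberships / memberships.sum()  # e.g. 0.6 happy, 0.3 surprise, ...
```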

A case of Noonan syndrome diagnosed using the facial recognition software (FACE2GENE)

  • Kim, Soo Kyoung;Jung, So Yoon;Bae, Seong Phil;Kim, Jieun;Lee, Jeongho;Lee, Dong Hwan
    • Journal of Genetic Medicine, Vol. 16, No. 2, pp. 81-84, 2019
  • Clinicians often have difficulty diagnosing patients with subtle Noonan syndrome phenotypes. Facial recognition technology can help in the identification of several genetic syndromes with facial dysmorphic features, especially those with mild or atypical phenotypes. A patient visited our clinic at 5 years of age with short stature. She was administered growth hormone treatment for 6 years, but her growth curve was still below the 3rd percentile. She and her mother had wide-spaced eyes and short stature, but there were no other remarkable features of a genetic syndrome. We analyzed their photographs using a smartphone facial recognition application. The results suggested Noonan syndrome; therefore, we performed targeted next-generation sequencing of genes associated with short stature. The results showed that they carried a mutation in the PTPN11 gene, a known pathogenic mutation of Noonan syndrome. Facial recognition technology can help in the diagnosis of Noonan syndrome and other genetic syndromes, especially in patients with mild phenotypes.

특허로 살펴본 얼굴인식 기술개발 동향 (Face Recognition Technology Trends Through Patent Analysis)

  • 정선화;최병철
    • 전자통신동향분석, Vol. 34, No. 2, pp. 29-39, 2019
  • Interest in facial recognition technology has been growing with the advancement of AI technology. With a confirmed accuracy of over 99%, the areas of application of the technology have expanded, including smartphone unlocking, online payment authorization, building access management, and criminal apprehension. This indicates that the technology has effectively transitioned from laboratory to field applications. This study performs patent analysis to determine recent innovations and diffusion trends in facial recognition technology. Specifically, R&D activities involving facial recognition technology are investigated at both the country and company level. Significant patents are also examined. The study aims to support R&D managers by proposing useful plans and strategies.