• Title/Abstract/Keyword: expression recognition

Search results: 713 items

A Study on the Facial Expression Recognition using Deep Learning Technique

  • Jeong, Bong Jae;Kang, Min Soo;Jung, Yong Gyu
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 6, No. 1
    • /
    • pp.60-67
    • /
    • 2018
  • In this paper, a method of matching the user's facial expression to an existing one is proposed, using an Android intelligent device to identify facial expressions. Understanding and expressing emotion are very important for human-computer interaction, and technology for identifying human expressions has become widespread. Instead of searching for the symbols (emoticons) that users often use, facial expressions can be identified with a camera, which is a useful and readily available technique. This thesis uses third-party data available on the web to improve the basis of facial expression recognition accuracy, refining a convolutional neural network algorithm to build a facial expression recognition model that matches the user's facial expression to similar expressions and reaches 66% accuracy. There is no need to search for symbols: if the camera recognizes the expression, the corresponding symbol appears immediately. The service therefore supplies the symbols people use when sending messages to others, which is very convenient. Removing the need to search through countless symbols reflects the growing use of deep learning, so a more suitable algorithm for expression recognition should be adopted to further improve accuracy.
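The abstract does not give the network details. As a rough illustration of the kind of convolutional classifier such a system might use (layer sizes, the 48x48 grayscale input, and the 7-class label set are assumptions, not taken from the paper), a minimal PyTorch sketch could look like this:

```python
# Minimal sketch of a CNN expression classifier (assumed architecture,
# not the network described in the paper).
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):  # 7 basic expressions (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                        # x: (N, 1, 48, 48) grayscale face crops
        return self.classifier(self.features(x))

model = ExpressionCNN()
logits = model(torch.randn(4, 1, 48, 48))        # dummy batch of face crops
print(logits.shape)                              # torch.Size([4, 7])
```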

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems
    • /
    • Vol. 6, No. 2
    • /
    • pp.261-268
    • /
    • 2010
  • Tracking human facial expressions in video has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was originally proposed for face recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM fitted with the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database, and the results show that it has numerous potential applications.
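The AAM fitting step itself (Inverse Compositional Image Alignment) is beyond a short sketch. Assuming an external tracker already produces per-frame AAM parameter vectors, the "continuous" part can be illustrated very roughly as frame-by-frame classification of those parameters; the SVM classifier, the 20-dimensional parameters, and the 6 expression classes below are all assumptions, not the paper's setup:

```python
# Sketch: frame-by-frame expression classification from AAM parameters.
# The AAM fitting (Inverse Compositional Image Alignment) is assumed to be done
# by an external tracker; p_frames holds one parameter vector per video frame.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy training data: AAM shape/appearance parameters with expression labels
# (synthetic stand-ins, not Cohn-Kanade features).
X_train = rng.normal(size=(300, 20))          # 300 frames, 20 AAM parameters
y_train = rng.integers(0, 6, size=300)        # 6 expression classes (assumption)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Continuous recognition: classify every incoming frame's parameter vector.
p_frames = rng.normal(size=(10, 20))          # 10 new frames from the tracker
print(clf.predict(p_frames))                  # one expression label per frame
```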

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 12, No. 4
    • /
    • pp.1657-1662
    • /
    • 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. The overall procedure of the proposed framework includes accurate face detection, which removes background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation; 1D transform features are then extracted from the aligned faces and reduced by principal component analysis. Finally, these augmented features are trained and tested using a Hidden Markov Model (HMM). The approach was evaluated on two public facial expression video datasets, Cohn-Kanade and AT&T, where it achieved recognition rates of 96.75% and 96.92%, respectively. These results show the superiority of the proposed approach over state-of-the-art methods.
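The classify-by-likelihood idea behind a PCA + HMM stage can be sketched as follows, assuming the per-frame feature vectors have already been extracted: one Gaussian HMM is trained per expression class, and a new sequence is assigned to the class with the highest log-likelihood. The hmmlearn library, the toy data, and all dimensions are assumptions, not the paper's configuration:

```python
# One GaussianHMM per expression class over PCA-reduced frame features;
# classify a new sequence by the highest HMM log-likelihood.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_classes, seq_len, feat_dim = 3, 20, 50

# Toy data: 10 training sequences per class (stand-ins for real feature sequences).
train = {c: [rng.normal(loc=c, size=(seq_len, feat_dim)) for _ in range(10)]
         for c in range(n_classes)}

# Reduce frame features with PCA fitted on all training frames.
pca = PCA(n_components=10).fit(
    np.vstack([s for seqs in train.values() for s in seqs]))

models = {}
for c, seqs in train.items():
    X = np.vstack([pca.transform(s) for s in seqs])
    lengths = [seq_len] * len(seqs)
    models[c] = GaussianHMM(n_components=4, covariance_type="diag",
                            n_iter=20).fit(X, lengths)

def classify(sequence):
    """Return the expression class whose HMM gives the highest log-likelihood."""
    z = pca.transform(sequence)
    return max(models, key=lambda c: models[c].score(z))

test_seq = rng.normal(loc=1, size=(seq_len, feat_dim))
print(classify(test_seq))   # expected to pick class 1 for this toy sequence
```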

A Facial Expression Recognition Method Using Two-Stream Convolutional Networks in Natural Scenes

  • Zhao, Lixin
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 2
    • /
    • pp.399-410
    • /
    • 2021
  • To address the problem that complex external variables in natural scenes strongly affect facial expression recognition results, a facial expression recognition method based on a two-stream convolutional neural network is proposed. The model introduces exponentially enhanced shared input weights before each level of convolution input and applies soft attention modules to the spatio-temporal features obtained by combining the static and dynamic streams. This enables the network to autonomously find areas that are more relevant to the expression category and to pay more attention to those areas, so that information from irrelevant, interfering regions is suppressed. To address the poor local robustness caused by lighting and expression changes, lighting preprocessing is also performed with a lighting preprocessing chain algorithm to remove most lighting effects. Experimental results on the AFEW6.0 and Multi-PIE datasets show that the recognition rates of this method are 95.05% and 61.40%, respectively, which is better than the compared methods.
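A minimal sketch of a soft spatial-attention module applied to fused static/dynamic feature maps is shown below. It is only one interpretation of the attention idea described in the abstract; the channel counts and fusion by concatenation are assumptions, not the paper's exact network:

```python
# Soft spatial attention over fused static/dynamic feature maps (sketch).
import torch
import torch.nn as nn

class SoftSpatialAttention(nn.Module):
    """Learns a per-location weight map and reweights the feature map with it."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # 1-channel attention logits
            nn.Sigmoid(),                           # soft weights in (0, 1)
        )

    def forward(self, x):                 # x: (N, C, H, W)
        weights = self.attn(x)            # (N, 1, H, W)
        return x * weights                # suppress irrelevant regions

# Fuse static (appearance) and dynamic (motion) stream features, then attend.
static_feat = torch.randn(2, 128, 14, 14)
dynamic_feat = torch.randn(2, 128, 14, 14)
fused = torch.cat([static_feat, dynamic_feat], dim=1)   # (2, 256, 14, 14)
attended = SoftSpatialAttention(256)(fused)
print(attended.shape)                                   # torch.Size([2, 256, 14, 14])
```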

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 3
    • /
    • pp.323-333
    • /
    • 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone was proposed to achieve more efficient expression recognition and classification. The residual layers acquire and capture the information features of the input image, while the masking layers produce the weight coefficients corresponding to different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model was optimized along two aspects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. Simulation results show that the ROC value of the proposed method remained above 0.9995, so different expressions can be distinguished accurately, and the precision reached 75.98%, indicating excellent performance of the facial expression recognition model.
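One plausible reading of the residual-plus-masking idea is a residual block whose output is reweighted by a learned mask. The sketch below follows that reading only; all layer sizes are assumed rather than taken from the paper:

```python
# Residual block with a learned masking branch (sketch, assumed sizes).
import torch
import torch.nn as nn

class MaskedResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        # Masking branch: per-feature weight coefficients in (0, 1).
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.residual(x) + x      # residual path captures the input features
        return h * self.mask(h)       # masking path weights those features

x = torch.randn(2, 64, 28, 28)
print(MaskedResidualBlock(64)(x).shape)   # torch.Size([2, 64, 28, 28])
```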

퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템 (Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing)

  • 김대진;김종성;변증남
    • The Institute of Electronics Engineers of Korea (IEEK): Conference Proceedings
    • /
    • Proceedings of the 2002 IEEK Summer Conference (3)
    • /
    • pp.25-28
    • /
    • 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the formal six universal facial expressions. However, it is very difficult for an ordinary person to produce such expressions without considerable effort and training. In addition, personalized services have recently become a major focus for researchers in various fields. Thus, we propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.


A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatio-temporal expression features: a spatial convolution neural network extracts the spatial information features of each static expression image, and a temporal convolution neural network extracts dynamic information features from the optical flow of multiple expression images. The spatio-temporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to perform facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than the other compared methods.
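The multiplicative fusion and SVM classification steps can be illustrated with a small sketch. The feature vectors below are random stand-ins for the outputs of the two convolutional networks, and the dimensions and class count are assumptions:

```python
# Element-wise (multiplicative) fusion of spatial and temporal features + SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, feat_dim, n_classes = 200, 256, 6

spatial = rng.normal(size=(n_clips, feat_dim))    # per-clip spatial CNN features
temporal = rng.normal(size=(n_clips, feat_dim))   # per-clip optical-flow CNN features
labels = rng.integers(0, n_classes, size=n_clips)

fused = spatial * temporal                        # multiplicative feature fusion

clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("toy accuracy:", clf.score(fused[150:], labels[150:]))
```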

표정 정규화를 통한 얼굴 인식율 개선 (Improvement of Face Recognition Rate by Normalization of Facial Expression)

  • 김진옥
    • The KIPS Transactions: Part B
    • /
    • Vol. 15B, No. 5
    • /
    • pp.477-486
    • /
    • 2008
  • Facial expressions, which arise from changes in the geometric features of the face, affect the recognition results of face recognition systems in various ways. To improve the face recognition rate, this study proposes facial expression normalization, which reduces the expression difference between the target face and the reference face. Instead of building a large image database, the proposed approach applies a generic facial muscle model to a single still image to handle facial expression modeling and normalization. The first method estimates the geometric coefficients of a linear muscle model to build a biological model of spontaneously changing facial expressions. The second method normalizes the facial muscle model of a given expression to a neutral face through RBF (Radial Basis Function)-based interpolation and warping. Experimental results show that applying the proposed expression normalization as a preprocessing step for face recognition methods such as eigenface, local binary pattern, and grayscale correlation measures yields a higher recognition rate than recognition without normalization.
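The RBF-based warping step can be illustrated as interpolating a displacement field, known at a few control points, over the whole landmark set. The sketch below uses SciPy's RBFInterpolator and synthetic coordinates; it is only an approximation of the described normalization, not the paper's implementation:

```python
# RBF-warping facial landmarks toward a neutral configuration (sketch).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Control points where the muscle model predicts an expression displacement,
# and the displacement that moves each one back to its neutral position.
control_pts = rng.uniform(0, 100, size=(10, 2))
to_neutral = rng.normal(scale=2.0, size=(10, 2))

# RBF interpolant of the displacement field over the face.
warp = RBFInterpolator(control_pts, to_neutral, kernel="thin_plate_spline")

# Apply the interpolated field to all landmarks of the expressive face.
landmarks = rng.uniform(0, 100, size=(68, 2))   # e.g. a 68-point landmark set
normalized = landmarks + warp(landmarks)        # expression-normalized landmarks
print(normalized.shape)                         # (68, 2)
```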

Multi-classifier Fusion Based Facial Expression Recognition Approach

  • Jia, Xibin;Zhang, Yanhua;Powers, David;Ali, Humayra Binte
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 1
    • /
    • pp.196-212
    • /
    • 2014
  • Facial expression recognition is an important part of emotional interaction between humans and machines. This paper proposes a facial expression recognition approach based on multi-classifier fusion with a stacking algorithm. The kappa-error diagram is employed for base-level classifier selection, giving insight into which individual classifiers have better recognition performance and how diverse they are, so that recognition accuracy can be improved by fusing their complementary strengths. To avoid the influence of chance (guessing) in algorithm evaluation and obtain a more reliable view of algorithm performance, kappa and informedness are used as measurement criteria alongside accuracy in the comparison experiments. To verify the effectiveness of the approach, two public databases are used in the experiments. The results show that, compared with individual classifiers and two other typical ensemble methods, the proposed stacked ensemble recognizes facial expressions more accurately and with a smaller standard deviation; it overcomes the bias of individual classifiers and achieves more reliable recognition results.
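A hedged sketch of a stacked ensemble evaluated with accuracy, Cohen's kappa, and an informedness proxy is given below. The base classifiers, the toy dataset, and the multi-class informedness approximation are assumptions, not the paper's configuration:

```python
# Stacking ensemble + accuracy / kappa / informedness-style evaluation (sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score, balanced_accuracy_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base-level classifiers (in the paper these are chosen via a kappa-error diagram).
base = [("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier())]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)

print("accuracy    :", accuracy_score(y_te, pred))
print("kappa       :", cohen_kappa_score(y_te, pred))
# 2*balanced accuracy - 1 equals informedness in the binary case;
# it is used here only as a rough multi-class proxy.
print("informedness:", 2 * balanced_accuracy_score(y_te, pred) - 1)
```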

Fear and Surprise Facial Recognition Algorithm for Dangerous Situation Recognition

  • Kwak, NaeJoung;Ryu, SungPil;Hwang, IlYoung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 7, No. 2
    • /
    • pp.51-55
    • /
    • 2015
  • This paper proposes an algorithm for recognizing dangerous situations from facial expressions. The proposed method recognizes surprise and fear among the various human emotional expressions in order to detect dangerous situations. It first extracts the facial region from the input using Haar-like features and detects the eye and lip regions within the extracted face. The method then applies uniform LBP to each region, detects the facial expression, and recognizes the dangerous situation. The proposed method was evaluated on MUCT database images and webcam input; it produces good facial expression results, discriminates dangerous situations well, and achieves an average recognition rate of 91.05%.
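The detection and uniform-LBP feature steps can be sketched with OpenCV and scikit-image as below. The eye/lip region proportions and the input file path are placeholders, and the final fear/surprise classifier is omitted:

```python
# Haar-cascade face detection + uniform LBP histograms per region (sketch).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(region, n_points=8, radius=1):
    """Normalized uniform-LBP histogram of a grayscale region."""
    lbp = local_binary_pattern(region, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)  # placeholder input

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    # Crude stand-ins for the detected eye and lip regions (assumed proportions).
    eyes = face[int(0.2 * h):int(0.5 * h), :]
    lips = face[int(0.65 * h):, :]
    features = np.concatenate([uniform_lbp_histogram(eyes),
                               uniform_lbp_histogram(lips)])
    print(features.shape)   # feature vector fed to the expression classifier
```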