• Title/Abstract/Keyword: expression recognition

Search results: 717 items

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / Vol. 9, No. 1 / pp.173-188 / 2013
  • In facial expression recognition systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate the approach.
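
Below is a minimal sketch of the region-based LBP + one-vs.-rest SVM pipeline this abstract describes, assuming the eye, nose, and mouth crops have already been extracted as grayscale arrays; the helper names and SVM hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

P, R = 8, 1  # common LBP setting: 8 neighbors, radius 1 (assumed, not from the paper)

def region_lbp_histogram(region, bins=256):
    """LBP histogram of one facial region (eyes, nose, or mouth)."""
    lbp = local_binary_pattern(region, P, R, method="default")
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return hist

def face_descriptor(regions):
    """Concatenate per-region LBP histograms into one feature vector per subject."""
    return np.concatenate([region_lbp_histogram(r) for r in regions])

def train_expression_classifier(region_sets, labels):
    """region_sets: list of [eye, nose, mouth] grayscale crops; labels: expression ids."""
    X = np.stack([face_descriptor(r) for r in region_sets])
    clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale", C=10.0))
    clf.fit(X, labels)
    return clf
```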

Enhanced Independent Component Analysis of Temporal Human Expressions Using Hidden Markov model

  • Lee, J.J.;Uddin, Zia;Kim, T.S.
    • Proceedings of the HCI Society of Korea Conference / HCI Society of Korea 2008 Conference, Part 1 / pp.487-492 / 2008
  • Facial expression recognition is an active research area in the design of human-computer interfaces. In this work, we present a new facial expression recognition system utilizing Enhanced Independent Component Analysis (EICA) for feature extraction and a discrete Hidden Markov Model (HMM) for recognition. To our knowledge, our approach is the first to analyze sequences of emotion-specific facial images with EICA and recognize them with an HMM. The performance of the proposed system is compared to conventional approaches in which Principal Component Analysis and Independent Component Analysis are used for feature extraction. Our preliminary results show that the proposed algorithm produces improved recognition rates compared to previous work.
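
As a rough illustration of this sequence-classification scheme, the sketch below projects frame sequences with plain FastICA and scores them with per-expression Gaussian HMMs from hmmlearn; the paper's Enhanced ICA and discrete HMM are replaced by these stand-ins, so treat it as an approximation under that assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA
from hmmlearn.hmm import GaussianHMM

def fit_ica(train_frames, n_components=30):
    """Learn an ICA basis from all training frames (rows = flattened face images)."""
    return FastICA(n_components=n_components, random_state=0).fit(train_frames)

def train_class_hmms(sequences_by_class, ica, n_states=4):
    """Train one HMM per expression on ICA-projected frame sequences."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack([ica.transform(s) for s in seqs])
        lengths = [len(s) for s in seqs]
        models[label] = GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=50).fit(X, lengths)
    return models

def classify_sequence(sequence, ica, models):
    """Pick the expression whose HMM assigns the highest log-likelihood."""
    feats = ica.transform(sequence)
    return max(models, key=lambda label: models[label].score(feats))
```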

Kinect Sensor-based LMA Motion Recognition Model Development

  • Hong, Sung Hee
    • International Journal of Advanced Culture Technology / Vol. 9, No. 3 / pp.367-372 / 2021
  • The purpose of this study is to show that movement expression activities are effective for intellectually disabled people in the learning process of a Kinect sensor-based LMA motion recognition model. We conducted ICT motion recognition games for intellectually disabled participants based on movement learning grounded in Laban Movement Analysis (LMA). The movement characteristics described by Laban's LMA include changes in the timing of movement performed by a body that perceives space, and the tension or relaxation of emotional expression. The design and implementation of the motion recognition model are described, and the feasibility of the proposed model is verified through a simple experiment. In the experiment, 24 movement expression activities conducted over 10 learning sessions with 5 participants showed a concordance rate of at least 53.4% on average. Learning motion games that respond to changes in motion had a positive effect on learning emotions.

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / Vol. 20, No. 6 / pp.806-813 / 2010
  • The human facial expression conveys emotion most directly, so it can serve as an efficient tool for delivering a person's intention to a computer. For fast and accurate recognition of facial expressions in a 2D image, this paper proposes a new method that integrates a discrete Adaboost classification algorithm and a neural network-based recognition algorithm. In the first step, the Adaboost algorithm finds the position and size of a face in the input image. Second, the detected face image is fed into five Adaboost strong classifiers, each trained for one facial expression. Finally, a neural network-based recognition algorithm, trained on the outputs of the Adaboost strong classifiers, determines the final facial expression. The proposed algorithm achieves real-time operation and enhanced accuracy by combining the speed and accuracy of the Adaboost classifier with the reliability of the neural network-based recognizer. The proposed algorithm recognizes five facial expressions (neutral, happiness, sadness, anger, and surprise) and achieves 86-95% accuracy in real time, depending on the expression type.
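
The two-stage recognizer could look roughly like the sketch below: five per-expression boosted classifiers produce scores that a small neural network fuses into a final label. Face detection (the paper's first Adaboost stage) is assumed to have already produced the feature vectors, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["neutral", "happiness", "sadness", "anger", "surprise"]

def train_two_stage(X, y):
    """X: feature vectors from detected face images, y: indices into EXPRESSIONS."""
    y = np.asarray(y)
    boosters = []
    for idx, _ in enumerate(EXPRESSIONS):
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        clf.fit(X, (y == idx).astype(int))  # one strong classifier per expression
        boosters.append(clf)
    scores = np.column_stack([b.decision_function(X) for b in boosters])
    fusion = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    fusion.fit(scores, y)  # the network learns to fuse the five scores
    return boosters, fusion

def predict_expression(x, boosters, fusion):
    scores = np.array([[b.decision_function(x.reshape(1, -1))[0] for b in boosters]])
    return EXPRESSIONS[int(fusion.predict(scores)[0])]
```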

A Study on Fuzzy Wavelet LDA Mixed Model for an effective Face Expression Recognition (효과적인 얼굴 표정 인식을 위한 퍼지 웨이브렛 LDA융합 모델 연구)

  • Rho, Jong-Heun;Baek, Young-Hyun;Moon, Sung-Ryong
    • Journal of the Korean Institute of Intelligent Systems / Vol. 16, No. 6 / pp.759-765 / 2006
  • In this paper, an effective facial expression recognition LDA mixed model using a triangular fuzzy membership function and a wavelet basis is proposed. The proposed algorithm uses a fuzzy wavelet step to obtain an optimized image and consists of a face feature detection step and a facial expression recognition step. The paper applies PCA and LDA with some simple strategies, and compares and analyzes the performance of the combined LDA mixed model against facial expression recognition based on PCA and LDA alone. In the LDA mixed model, each face is represented by both the PCA and the LDA approaches. We then calculate the distances dPCA and dLDA from all faces in the database. Finally, the two distances are combined according to a given combination rule, and the final decision is made by NNPC. The results show that the LDA mixed model is superior to the conventional algorithm.
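
A minimal sketch of the distance-fusion step is given below: a probe face is projected with both PCA and LDA, distances dPCA and dLDA to every gallery face are computed, combined with a weighted rule, and the nearest gallery face decides the label. The fuzzy-wavelet preprocessing and the paper's exact combination rule are not reproduced; the weighting is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_models(X_train, y_train, n_pca=40):
    pca = PCA(n_components=n_pca).fit(X_train)
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    gallery = {"pca": pca.transform(X_train),
               "lda": lda.transform(X_train),
               "labels": np.asarray(y_train)}
    return pca, lda, gallery

def classify(x, pca, lda, gallery, alpha=0.5):
    """alpha weights the PCA distance against the LDA distance (assumed rule)."""
    d_pca = np.linalg.norm(gallery["pca"] - pca.transform([x]), axis=1)
    d_lda = np.linalg.norm(gallery["lda"] - lda.transform([x]), axis=1)
    d_pca /= d_pca.max() + 1e-12   # put both distances on a comparable scale
    d_lda /= d_lda.max() + 1e-12
    combined = alpha * d_pca + (1.0 - alpha) * d_lda
    return gallery["labels"][np.argmin(combined)]  # nearest-neighbor decision
```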

Hierarchical Hand Pose Model for Hand Expression Recognition (손 표현 인식을 위한 계층적 손 자세 모델)

  • Heo, Gyeongyong;Song, Bok Deuk;Kim, Ji-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 25, No. 10 / pp.1323-1329 / 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on dynamic hand movement are used together. In this paper, we propose a hierarchical hand pose model based on finger position and shape for hand expression recognition. For hand pose recognition, a finger model representing the finger states and a hand pose model built on those finger states are constructed hierarchically on top of the open-source MediaPipe framework. The finger model itself is hierarchical, combining the bending of a single finger and the touching of two fingers. The proposed model can be used in various applications that transmit information through the hands, and its usefulness was verified by applying it to number recognition in sign language. Beyond sign language recognition, the proposed model is expected to have various applications in computer user interfaces.
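
The hierarchy could be organized roughly as in the sketch below, which uses MediaPipe Hands landmarks to compute per-finger states (extended or bent) and then reads a hand pose off those states. The thresholds and the tiny pose table are illustrative assumptions, not the paper's model.

```python
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmarks_to_array(hand_landmarks):
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark])

def finger_extended(pts, tip, pip, mcp):
    """A finger counts as extended if tip-pip-mcp is nearly a straight line."""
    v1, v2 = pts[tip] - pts[pip], pts[mcp] - pts[pip]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return cos < -0.7  # angle close to 180 degrees

def finger_states(pts):
    # (tip, pip, mcp) landmark indices for index, middle, ring, pinky.
    return [finger_extended(pts, *f) for f in [(8, 6, 5), (12, 10, 9),
                                               (16, 14, 13), (20, 18, 17)]]

def hand_pose(image_rgb):
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(image_rgb)
    if not result.multi_hand_landmarks:
        return None
    states = finger_states(landmarks_to_array(result.multi_hand_landmarks[0]))
    if states == [True, False, False, False]:
        return "point"            # example pose label (hypothetical)
    return "open_hand" if all(states) else "other"
```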

Hand Expression Recognition for Virtual Blackboard (가상 칠판을 위한 손 표현 인식)

  • Heo, Gyeongyong;Kim, Myungja;Song, Bok Deuk;Shin, Bumjoo
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 25, No. 12 / pp.1770-1776 / 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on hand movement are used together. In this paper, we propose a hand expression recognition method that recognizes symbols based on the trajectory of hand movement on a virtual blackboard. To recognize a sign drawn by hand on a virtual blackboard, not only must the sign be recognized from the hand movement, but hand pose recognition is also required to find the start and end of data input. In this paper, MediaPipe was used to recognize hand poses, and an LSTM (Long Short-Term Memory) network, a type of recurrent neural network, was used to recognize hand gestures from the time-series data. To verify the effectiveness of the proposed method, it was applied to the recognition of numbers written on a virtual blackboard, and a recognition rate of about 94% was obtained.
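
For the gesture stage, a sequence model along the lines of the sketch below could classify the fingertip trajectory collected between the start and end poses; the sequence length, layer sizes, and padding scheme are assumptions for illustration.

```python
import tensorflow as tf

MAX_LEN = 64      # trajectory padded/truncated to a fixed length (assumption)
NUM_CLASSES = 10  # digits 0-9 written on the virtual blackboard

def build_trajectory_lstm():
    """LSTM over a padded sequence of (x, y) fingertip positions."""
    return tf.keras.Sequential([
        tf.keras.layers.Masking(mask_value=0.0, input_shape=(MAX_LEN, 2)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_trajectory_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# X: (num_samples, MAX_LEN, 2) padded trajectories, y: digit labels
# model.fit(X, y, epochs=30, validation_split=0.2)
```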

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin, Young-Suk
    • Korean Journal of Cognitive Science / Vol. 16, No. 4 / pp.323-331 / 2005
  • This paper proposes an improved system for recognizing facial expressions over various internal states that is illumination-invariant and does not require a detectable cue such as a neutral expression. As preprocessing to extract facial expression information, a whitening step was applied: the mean of the images is set to zero and the variances are equalized to unit variance, which reduces much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. This makes it possible to extract features from facial expression images without a detectable neutral-expression cue. The experimental results also show that varied and natural facial expression recognition can be implemented, because recognition is performed with a dimensional model of internal states on images selected randomly from facial expression images corresponding to 83 internal emotional states.
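
The preprocessing and representation can be sketched as below: whiten the image matrix to zero mean and unit variance, fit PCA, and drop the first principal component when projecting; the dimensions and the exact whitening convention are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def whiten(images):
    """images: (n_samples, n_pixels); zero mean, unit variance per pixel."""
    mean = images.mean(axis=0)
    std = images.std(axis=0) + 1e-8
    return (images - mean) / std, mean, std

def fit_pca_without_first(images, n_components=50):
    """Return a projection function that discards the first principal component."""
    Xw, mean, std = whiten(images)
    pca = PCA(n_components=n_components).fit(Xw)

    def project(batch):
        z = pca.transform((batch - mean) / std)
        return z[:, 1:]  # drop PC1, which mostly captures illumination variation
    return project
```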

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong;Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 19, No. 3 / pp.523-528 / 2015
  • This paper proposes an algorithm for risk situation recognition using facial expressions. The proposed method recognizes the surprise and fear expressions, among the various human emotional expressions, to identify risk situations. It first extracts the facial region from the input image and then detects the eye and lip regions in the extracted face. It then applies uniform LBP to each region, discriminates the facial expression, and recognizes the risk situation. The proposed method is evaluated on images from the Cohn-Kanade database, which contains six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method produces good facial expression recognition results and discriminates risk situations well.
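
A hedged sketch of the decision logic: uniform LBP histograms from the eye and lip regions form the feature vector, an expression classifier labels it, and a frame counts as a risk situation when the label is fear or surprise. The specific classifier is not fixed by the abstract, so it is left as a passed-in model here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1
N_BINS = P + 2  # number of uniform LBP codes for P = 8

def uniform_lbp_hist(region):
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def expression_features(eye_region, lip_region):
    return np.concatenate([uniform_lbp_hist(eye_region), uniform_lbp_hist(lip_region)])

RISK_EXPRESSIONS = {"fear", "surprise"}

def is_risk_situation(eye_region, lip_region, clf):
    """clf: any classifier trained on these features with expression-name labels."""
    expression = clf.predict([expression_features(eye_region, lip_region)])[0]
    return expression in RISK_EXPRESSIONS
```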

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / Vol. 15, No. 4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system of smile facial expressions for smile treatment training. The proposed system detects face candidate regions by using Haar-like features from camera images. It then verifies whether a detected face candidate region is a face or non-face by using SVM (Support Vector Machine) classification. For the detected face image, it applies illumination normalization based on histogram matching in order to minimize the effect of illumination change. In the facial expression recognition step, it computes a facial feature vector by using PCA (Principal Component Analysis) and recognizes the smile expression by using a multilayer perceptron artificial neural network. The proposed system lets the user practice smiling by recognizing the user's smile expression in real time and displaying the amount of smile. Experimental results show that the proposed system improves the correct recognition rate by using SVM-based face region verification and histogram-matching-based illumination normalization.
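
The recognition stages could be sketched as follows: histogram matching against a reference face for illumination normalization, PCA features, and a multilayer perceptron whose smile probability serves as the displayed amount of smile. The reference image, layer sizes, and the use of the predicted probability as the score are assumptions.

```python
import numpy as np
from skimage.exposure import match_histograms
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def normalize_illumination(face, reference_face):
    """Match a grayscale face's intensity histogram to a fixed reference face."""
    return match_histograms(face, reference_face)

def train_smile_recognizer(faces, labels, reference_face, n_components=40):
    """faces: grayscale arrays; labels: 1 = smile, 0 = non-smile."""
    X = np.stack([normalize_illumination(f, reference_face).ravel() for f in faces])
    pca = PCA(n_components=n_components).fit(X)
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    mlp.fit(pca.transform(X), labels)
    return pca, mlp

def smile_amount(face, reference_face, pca, mlp):
    x = pca.transform([normalize_illumination(face, reference_face).ravel()])
    return mlp.predict_proba(x)[0, 1]  # probability of smile, shown as the score
```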