• Title/Summary/Keyword: expression recognition


Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.60-68 / 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component (CC) analysis. Thereafter, the positions of the user's eyes are localized using a neural network (NN)-based texture classifier, and the remaining facial features are then localized using heuristics. After detection of the facial features, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.

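The decision-tree stage of a rule-based system like the one above can be sketched as follows; the feature names, thresholds, and rules here are illustrative assumptions, not the authors' actual tree.

```python
# Hypothetical decision tree over heuristic facial-feature measurements,
# in the spirit of a rule-based expression classifier. All feature names
# and thresholds are illustrative, not taken from the paper.

def classify_expression(mouth_openness, mouth_corner_lift, brow_raise, nose_wrinkle):
    """Return one of: 'surprise', 'happiness', 'disgust', 'neutral'.

    Inputs are assumed normalized to [0, 1] relative to the neutral face.
    """
    if mouth_openness > 0.6 and brow_raise > 0.5:
        return "surprise"          # wide-open mouth + raised brows
    if mouth_corner_lift > 0.5:
        return "happiness"         # lifted mouth corners (smile)
    if nose_wrinkle > 0.5 and mouth_corner_lift < 0.2:
        return "disgust"           # wrinkled nose, flat mouth corners
    return "neutral"
```

Each internal node tests one heuristic feature, so a misdetected feature only affects the subtree below it; this is one reason tree structures pair well with heuristic feature localization.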

High Efficiency Adaptive Facial Expression Recognition based on Incremental Active Semi-Supervised Learning (점진적 능동준지도 학습 기반 고효율 적응적 얼굴 표정 인식)

  • Kim, Jin-Woo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.165-171 / 2017
  • Recognizing human facial expressions in the real world is difficult: high accuracy is achieved only when the training database and the test data come from similar conditions. Solving this problem requires a large amount of facial expression data. In this paper, we propose an algorithm for gathering facial expression data from various environments and quickly reaching high accuracy. The algorithm trains an initial model with ASSL (Active Semi-Supervised Learning) using a deep learning network, then gathers unlabeled facial expression data and repeats this process. Through ASSL, we obtain suitable data and high accuracy with less labeling effort.
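One round of an active semi-supervised loop of this kind might look like the sketch below; the confidence thresholds and the `model_score`/`oracle` interfaces are assumptions for illustration, not the paper's implementation.

```python
def assl_round(model_score, labeled, unlabeled, oracle, hi=0.9, lo=0.6):
    """One round of Active Semi-Supervised Learning (illustrative sketch).

    model_score(x) -> (label, confidence). Samples the model is confident
    about are pseudo-labeled (the semi-supervised part); samples it is
    unsure about are sent to a human oracle (the active part). The
    thresholds hi/lo are illustrative choices.
    """
    still_unlabeled = []
    for x in unlabeled:
        label, conf = model_score(x)
        if conf >= hi:
            labeled.append((x, label))        # trust the model's prediction
        elif conf <= lo:
            labeled.append((x, oracle(x)))    # ask a human annotator
        else:
            still_unlabeled.append(x)         # defer to a later round
    return labeled, still_unlabeled
```

Repeating this round after retraining the model on the grown labeled pool is what lets accuracy climb while the annotator only ever sees the low-confidence samples.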

Expression of Various Pattern Recognition Receptors in Gingival Epithelial Cells

  • Shin, Ji-Eun;Ji, Suk;Choi, Young-Nim
    • International Journal of Oral Biology / v.33 no.3 / pp.77-82 / 2008
  • The innate immune response is initiated by the recognition of unique microbial molecular patterns through pattern recognition receptors (PRRs). The purpose of this study was to dissect the expression of various PRRs in gingival epithelial cells in differentiated versus undifferentiated states. Differentiation of immortalized human gingival epithelial HOK-16B cells was induced by culture in the presence of high $Ca^{2+}$ at increased cell density. The expression levels of various PRRs in HOK-16B cells were examined by real-time reverse transcription polymerase chain reaction (RT-PCR) and flow cytometry. In addition, the expression of human beta defensins (HBDs) was examined by real-time RT-PCR, and the amounts of secreted cytokines were measured by enzyme-linked immunosorbent assay. In undifferentiated HOK-16B cells, NACHT-LRR-PYD-containing protein (NALP) 2 was expressed most abundantly, and Toll-like receptor (TLR) 2, TLR4, nucleotide-binding oligomerization domain (NOD) 1, and NOD2 were expressed at substantial levels. However, TLR3, TLR7, TLR8, TLR9, ICE protease-activating factor (IPAF), and NALP6 were hardly expressed. In differentiated cells, the levels of NOD2, NALP2, and TLR4 differed from those in undifferentiated cells at the RNA level but not at the protein level. Interestingly, differentiated cells expressed increased levels of HBD-1 and -3 but secreted a reduced amount of IL-8. In conclusion, the repertoire of PRRs expressed by gingival epithelial cells is limited, and undifferentiated and differentiated cells express similar levels of PRRs.

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2124-2148 / 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER that is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improve the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely, the gray-value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor based on a combination of the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method. Meanwhile, the recognition rate of this method is not significantly affected under Gaussian noise and salt-and-pepper noise.
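The core idea of an edge-gated orientation histogram can be sketched in a few lines; note this is a simplification, not the paper's Canny-HOG: a plain magnitude threshold stands in for the Canny detector, and a single global histogram stands in for the cell/block pipeline.

```python
import math

def edge_orientation_histogram(img, mag_thresh=1.0, n_bins=9):
    """Simplified stand-in for an edge-gated HOG descriptor.

    A real implementation would run the Canny detector and the standard
    HOG cell/block normalization; here a magnitude threshold plays the
    role of the edge mask, and one global unsigned-orientation histogram
    plays the role of the cell histograms. `img` is a 2D list of gray values.
    """
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag < mag_thresh:                 # crude edge mask
                continue
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / n_bins)) % n_bins] += mag
    return hist
```

Gating the histogram on edge strength is what makes the descriptor ignore flat noisy regions, which is the intuition behind combining Canny with HOG.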

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
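The "predict" half of such a loop, projecting the posed 3D model's landmarks into the image to seed the 2D tracker, can be sketched as below; the pinhole camera, yaw-only rotation, and parameter values are illustrative assumptions rather than the authors' camera model.

```python
import math

def project_landmarks(points3d, yaw, f=500.0, tz=1000.0):
    """Predict 2D landmark positions from 3D model points and head pose.

    Illustrative sketch: a yaw-only rotation about the vertical axis
    followed by a pinhole projection with focal length f and camera
    distance tz. A full tracker would use the complete 6-DoF pose and
    then correct these predictions with the 2D tracker's measurements.
    """
    out = []
    c, s = math.cos(yaw), math.sin(yaw)
    for X, Y, Z in points3d:
        xr, zr = c * X + s * Z, -s * X + c * Z   # rotate about vertical axis
        out.append((f * xr / (zr + tz), f * Y / (zr + tz)))
    return out
```

Seeding the 2D tracker from these projections is what keeps the landmark search windows small even under large head rotations.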

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method that uses facial images and speech signals. Six basic emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, and the feature vectors are obtained through ICA (Independent Component Analysis). For the speech signal, the recognition algorithm is performed independently for each wavelet subband, and the final result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
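The final merging step of a multimodal recognizer can be as simple as a weighted sum of per-emotion scores; the weight below is an illustrative choice, not the paper's fusion rule.

```python
def fuse_modalities(face_scores, speech_scores, w_face=0.6):
    """Merge per-emotion scores from the face and speech recognizers by a
    weighted sum and return the winning emotion. The modality weight is
    an illustrative assumption, not the paper's decision scheme.
    """
    fused = {e: w_face * face_scores[e] + (1 - w_face) * speech_scores[e]
             for e in face_scores}
    return max(fused, key=fused.get)
```

Late fusion like this lets either modality override the other when its evidence is strong, which is typically why merged results beat each single-modality recognizer.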

Galectin-1 from redlip mullet Liza haematocheilia: identification, immune responses, and functional characterization as pattern recognition receptors (PRRs) in host immune defense system

  • Chaehyeon Lim;Hyukjae Kwon;Jehee Lee
    • Fisheries and Aquatic Sciences / v.25 no.11 / pp.559-571 / 2022
  • Galectins, a family of β-galactoside-binding lectins, have emerged as soluble mediators in infected cells and as pattern recognition receptors (PRRs) responsible for evoking and regulating innate immunity. The present study aimed to evaluate the role of galectin-1 in the host immune response of redlip mullet (Liza haematocheilia). We established a cDNA database for redlip mullet, and the cDNA sequence of galectin-1 (LhGal-1) was characterized. In silico analysis was performed, and the spatial and temporal expression patterns in gills and blood in response to lipopolysaccharide, polyinosinic:polycytidylic acid, and Lactococcus garvieae were estimated via quantitative real-time PCR. Functional assays were conducted using recombinant protein to investigate carbohydrate binding, bacterial binding, and bacterial agglutination activity. LhGal-1 is composed of 135 amino acids. Conserved motifs (H-NPR, -N- and -W-E-R) within the carbohydrate recognition domain were found in LhGal-1. The tissue distribution revealed that the healthy stomach expressed high levels of LhGal-1. Temporal monitoring of LhGal-1 mRNA expression in the gill and blood showed significant upregulation in response to immune challenges with the different stimulants. rLhGal-1 exhibited binding activity toward carbohydrates and bacteria. Moreover, agglutination of rLhGal-1 against Escherichia coli was observed. Collectively, our findings suggest that LhGal-1 may function as a PRR in redlip mullet and can be considered a significant gene playing a protective role in the redlip mullet immune system.

A facial expressions recognition algorithm using image area segmentation and face element (영역 분할과 판단 요소를 이용한 표정 인식 알고리즘)

  • Lee, Gye-Jeong;Jeong, Ji-Yong;Hwang, Bo-Hyun;Choi, Myung-Ryul
    • Journal of Digital Convergence / v.12 no.12 / pp.243-248 / 2014
  • In this paper, we propose a method to recognize facial expressions by selecting face elements and determining their states. The face elements are selected using an image area segmentation method, and the facial expression is decided using the normal distribution of the change rates of the face elements. In order to recognize the proper facial expression, we built a database of the facial expressions of 90 people and propose a method to decide among four expressions (happy, anger, stress, and sad). The proposed method has been simulated and verified in terms of face element detection rate and facial expression recognition rate.
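A normal-distribution decision over element change rates can be sketched as a per-expression Gaussian log-likelihood comparison; the element names and distribution parameters below are hypothetical stand-ins for statistics that would come from the 90-person database.

```python
import math

def gaussian_pdf(x, mean, std):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify_by_change_rates(rates, models):
    """Pick the expression whose per-element normal distributions best
    explain the observed change rates (illustrative sketch).

    `rates` maps face element -> observed change rate; `models` maps
    expression -> {element: (mean, std)}. Elements are assumed independent,
    so log-densities are summed per expression.
    """
    best, best_ll = None, float("-inf")
    for expr, params in models.items():
        ll = sum(math.log(gaussian_pdf(rates[el], m, s))
                 for el, (m, s) in params.items())
        if ll > best_ll:
            best, best_ll = expr, ll
    return best
```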

Emotion Training: Image Color Transfer with Facial Expression and Emotion Recognition (감정 트레이닝: 얼굴 표정과 감정 인식 분석을 이용한 이미지 색상 변환)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.24 no.4 / pp.1-9 / 2018
  • We propose an emotion training framework that can detect initial symptoms of schizophrenia by using an emotion analysis method based on facial expression changes. We use the Microsoft Emotion API to obtain facial expression and emotion values at the present time, and we analyze these values to recognize subtle facial expressions that change over time. The emotion states are classified using a peak-analysis-based variance method in order to measure the emotions appearing in facial expressions over time. The proposed method analyzes deficits in emotion recognition and expressive ability by using characteristics that deviate from the emotional state changes classified according to the six basic emotions proposed by Ekman. Finally, the analyzed values are integrated into an image color transfer framework so that users can easily recognize and train their own emotional changes.
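Segmenting emotion states by peaks in windowed variance can be sketched as below; the window size and threshold are illustrative choices, and this is only a generic variance-peak detector, not the paper's exact method.

```python
def variance_peaks(values, win=5, thresh=0.5):
    """Locate indices where the sliding-window variance of an emotion-score
    sequence peaks, marking transitions between emotion states
    (illustrative sketch of a peak-analysis-based variance method).
    """
    variances = []
    for i in range(len(values) - win + 1):
        w = values[i:i + win]
        mean = sum(w) / win
        variances.append(sum((v - mean) ** 2 for v in w) / win)
    peaks = []
    for i in range(1, len(variances) - 1):
        if (variances[i] > thresh
                and variances[i] >= variances[i - 1]
                and variances[i] >= variances[i + 1]):
            peaks.append(i)
    return peaks
```

Windows that straddle a state change mix two score levels and so show high variance, which is why the variance peaks line up with emotion transitions.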

Facial Expression Recognition using ICA-Factorial Representation Method (ICA-factorial 표현법을 이용한 얼굴감정인식)

  • Han, Su-Jeong;Kwak, Keun-Chang;Go, Hyoun-Joo;Kim, Sung-Suk;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.371-376 / 2003
  • In this paper, we propose a method for recognizing facial expressions using the ICA (Independent Component Analysis)-factorial representation method. Facial expression recognition consists of two stages. First, feature extraction transforms the high-dimensional face space into a low-dimensional feature space using PCA (Principal Component Analysis), and the feature vectors are then extracted using the ICA-factorial representation method. The second stage, recognition, is performed using a Euclidean-distance-based KNN (K-Nearest Neighbor) algorithm. We constructed a facial expression database for six basic expressions (happiness, sadness, anger, surprise, fear, dislike) and obtained better performance than previous works.
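The recognition stage of such a pipeline is a plain distance-based KNN vote over the extracted feature vectors; the sketch below shows only that stage, with the PCA-then-ICA feature extraction assumed to have already produced the vectors.

```python
import math

def knn_classify(query, gallery, k=3):
    """Euclidean-distance KNN over precomputed feature vectors (the
    recognition stage of a PCA/ICA pipeline). `gallery` is a list of
    (feature_vector, label) pairs; the feature extraction itself is
    not shown here.
    """
    # Sort gallery entries by Euclidean distance to the query vector.
    dists = sorted((math.dist(query, vec), label) for vec, label in gallery)
    # Majority vote among the k nearest neighbors.
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

With k=1 this reduces to nearest-neighbor matching; small odd k values trade a little bias for robustness to single mislabeled gallery entries.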