• Title/Summary/Keyword: Facial Feature Extraction


Face Extraction using Genetic Algorithm, Stochastic Variable and Geometrical Model (유전 알고리즘, 통계적 변수, 기하학적 모델에 의한 얼굴 영역 추출)

  • 이상진;홍준표;이종실;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.891-894
    • /
    • 1998
  • This paper introduces an automatic face region extraction method. The method consists of two parts: face region detection and extraction of the facial organs, namely the eyes, eyebrows, nose, and mouth. In the first stage, genetic algorithms (GAs) are used to locate the face region against a complex background. In the second stage, a geometrical face model is used to extract the eyes, eyebrows, nose, and mouth. In both stages, a stochastic variable is used to deal with problems caused by bad lighting conditions; the number of blurring operations is determined according to its value. The average computation time is less than 1 second, and the method extracts facial features efficiently from images taken under different lighting conditions.
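
The abstract gives no implementation details, but the first stage can be sketched as a GA searching for a bounding box that maximizes a skin-pixel fitness score. The skin thresholds, GA operators, and all names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def skin_mask(img_ycrcb):
    """Rough skin segmentation in YCrCb space (thresholds are illustrative)."""
    cr, cb = img_ycrcb[..., 1], img_ycrcb[..., 2]
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

def fitness(box, mask):
    """Fraction of skin pixels inside the candidate box (x, y, w, h)."""
    x, y, w, h = box.astype(int)
    region = mask[y:y + h, x:x + w]
    return region.mean() if region.size else 0.0

def ga_face_search(mask, pop=30, gens=50, mut=0.1):
    h_img, w_img = mask.shape
    # Each chromosome encodes a candidate face box (x, y, w, h).
    population = np.column_stack([
        rng.integers(0, w_img // 2, pop),
        rng.integers(0, h_img // 2, pop),
        rng.integers(20, w_img // 2, pop),
        rng.integers(20, h_img // 2, pop),
    ]).astype(float)
    for _ in range(gens):
        scores = np.array([fitness(ind, mask) for ind in population])
        # Keep the better half as parents, refill by blend crossover.
        order = np.argsort(scores)[::-1]
        parents = population[order[:pop // 2]]
        pairs = rng.integers(0, len(parents), (pop - len(parents), 2))
        alpha = rng.random((len(pairs), 1))
        children = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
        # Gaussian mutation keeps the search stochastic.
        children += rng.normal(0, mut * np.array([w_img, h_img, w_img, h_img]),
                               children.shape)
        population = np.vstack([parents, children]).clip(
            0, [w_img - 1, h_img - 1, w_img, h_img])
    return max(population, key=lambda b: fitness(b, mask)).astype(int)
```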


Cluster Headache-like Facial Pain following Dental Extraction: A Case Report

  • Byun, Jin-Seok;Jung, Jae-Kwang;Choi, Jae-Kap
    • Journal of Oral Medicine and Pain
    • /
    • v.39 no.3
    • /
    • pp.115-118
    • /
    • 2014
  • A 50-year-old female patient with severe unilateral pain in the right eye, head, and face, accompanied by lacrimation, drooping of the right eyelid, and rhinorrhea from the right nostril, which developed immediately after extraction of the maxillary right first and second molars, was successfully treated with oral administration of sumatriptan and prednisolone, or verapamil. Although the clinical characteristics are similar to those reported for cluster headache except for the temporal pattern, probable cluster headache, hemicrania continua, and acute migraine should be included in the differential diagnosis.

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1495-1504
    • /
    • 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step, the algorithm extracts the facial features, namely the eyebrows, eyes, nose, mouth, and chin. From the extracted features it detects shape and form information and the ratios of the distances between features, and formulates these into evaluation functions that classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these features, it then analyzes the face. The face analysis algorithm uses information about the pixel distribution and gradient of each feature; in other words, it analyzes a face by comparing such information across the features.
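
As a rough illustration of distance-ratio evaluation functions of this kind, the sketch below classifies two feature types from landmark coordinates. The landmark names, ratios, and thresholds are invented for illustration; the paper's 12/3/9/4-way classifiers would need one evaluation function per category:

```python
import numpy as np

def dist(p, q):
    """Euclidean distance between two (x, y) landmarks."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def eye_type(lm):
    """Classify eye shape from its width/height ratio (thresholds illustrative)."""
    ratio = dist(lm["eye_outer"], lm["eye_inner"]) / dist(lm["eye_top"], lm["eye_bottom"])
    if ratio > 3.5:
        return "narrow"
    if ratio > 2.5:
        return "average"
    return "round"

def mouth_type(lm, face_width):
    """Classify the mouth by its width relative to the face width."""
    rel = dist(lm["mouth_left"], lm["mouth_right"]) / face_width
    if rel > 0.45:
        return "wide"
    if rel > 0.35:
        return "average"
    return "small"
```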

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from expression images and tend to ignore the dynamic features of the video sequence. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial network extracts the spatial information of each static expression image, and a temporal network extracts dynamic information from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are fed into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the compared methods.
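
A minimal sketch of the two-stream architecture with multiplicative fusion is shown below (PyTorch, with toy network sizes; the paper's actual networks, OpenFace preprocessing, and SVM stage are not reproduced). The fused vectors would then be passed to a classifier such as scikit-learn's SVC:

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """A small stand-in for each of the two deep CNNs in the paper."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

spatial = StreamCNN(in_channels=3)    # static face crops (RGB)
temporal = StreamCNN(in_channels=2)   # stacked optical flow (dx, dy)

faces = torch.randn(8, 3, 64, 64)     # dummy batch of face images
flows = torch.randn(8, 2, 64, 64)     # dummy batch of flow fields

# Multiplicative fusion: element-wise product of the two feature vectors,
# which the paper then feeds to an SVM for classification.
fused = spatial(faces) * temporal(flows)
print(fused.shape)                    # torch.Size([8, 128])
```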

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • Kim, Hong-Hyun;Park, Hyun-Seok;Kim, Eung-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.289-292
    • /
    • 2009
  • People communicate with one another through language, but a person with a disability may be unable to convey ideas through speech, writing, or gesture. In this paper, we therefore implemented a communication system that uses facial muscle signals so that disabled persons can communicate. In particular, after feature extraction from the EEG recording that contains the facial muscle activity, the facial muscle signal is converted into a control signal, which is then used to select characters on a minimum-list keyboard.
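
One way such a pipeline could work is sketched below: band-pass filtering and an RMS feature detect a deliberate muscle contraction, which selects the currently highlighted group on a scanning keyboard. The sampling rate, pass band, threshold, and key groups are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512  # sampling rate in Hz (illustrative)

def muscle_feature(signal, lo=20.0, hi=150.0):
    """Band-pass the raw electrode signal and return its RMS amplitude,
    a common feature for detecting a deliberate muscle contraction."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return float(np.sqrt(np.mean(filtered ** 2)))

KEYS = ["ABCD", "EFGH", "IJKL", "MNOP", "QRST", "UVWXYZ"]  # minimal key groups

def scan_keyboard(windows, threshold=0.5):
    """Scanning keyboard: key groups are highlighted in turn, and a
    contraction (RMS above threshold) selects the highlighted group."""
    for i, win in enumerate(windows):
        if muscle_feature(win) > threshold:
            return KEYS[i % len(KEYS)]
    return None
```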


A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2124-2148
    • /
    • 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER which is robust to noise. The main contributions of this work are as follows. First, to preserve texture details in facial expression images and remove image noise, we improve the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely the gray-value difference between the object and the background, and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor that combines the Histogram of Oriented Gradients with the Canny operator (Canny-HOG) and can represent the precise deformation of the eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method; moreover, its recognition rate is not significantly affected under Gaussian noise and salt-and-pepper noise conditions.
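
The sketch below combines plain Perona-Malik anisotropic diffusion (not the paper's improved coefficient) with a HOG descriptor computed on the Canny edge map, one plausible reading of "Canny-HOG"; the diffusion and HOG parameters are illustrative:

```python
import cv2
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Plain Perona-Malik diffusion; the paper modulates the coefficient
    further using object/background contrast and gradient magnitude."""
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # Finite differences to the four neighbours.
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Edge-stopping function: diffuse less across strong gradients.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return np.clip(u, 0, 255).astype(np.uint8)

def canny_hog(gray, cell=8, block=16):
    """HOG computed on the Canny edge map of the denoised image."""
    edges = cv2.Canny(anisotropic_diffusion(gray), 50, 150)
    hog = cv2.HOGDescriptor((64, 64), (block, block), (cell, cell),
                            (cell, cell), 9)
    return hog.compute(cv2.resize(edges, (64, 64)))
```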

Facial Features Extraction for Sasang Constitution Classification (사상채질 분류를 위한 안면부내 특징 요소 추출)

  • Bae, Na-Yeong;An, Taek-Won;Jo, Dong-Uk;Lee, Hwa-Seop
    • Journal of Sasang Constitutional Medicine
    • /
    • v.17 no.2
    • /
    • pp.46-51
    • /
    • 2005
  • 1. Objectives: The purpose of this study is to objectify the diagnosis of Sasang Constitution; the methods developed here should improve Sasang Constitution classification. 2. Methods: (1) automatic feature extraction from human frontal faces for Sasang Constitution classification; (2) color feature extraction from human frontal faces, by erosion filtering that binarizes the image (skin white, everything else black) followed by median filtering. 3. Results and Conclusions: Observing a person's shape has been the major method of Sasang Constitution classification, and to this day it usually depends on the doctor's intuition. We are developing an automatic system that provides objective basic data for Sasang Constitution classification. To this end, this paper applies signal processing techniques to the automatic feature extraction of human frontal faces, and an experiment is conducted to verify the effectiveness of the proposed system.
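
A minimal sketch of the color-feature preprocessing described in the Methods, assuming a YCrCb skin threshold (the abstract does not state which color model the paper uses):

```python
import cv2
import numpy as np

def skin_binary(img_bgr):
    """Binarize the face image (skin white, everything else black), then
    clean the mask with erosion and median filtering as in the paper."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=1)
    mask = cv2.medianBlur(mask, 5)
    return mask
```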


Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.3
    • /
    • pp.395-400
    • /
    • 2011
  • In this paper, we propose a study of facial emotion recognition that considers the dynamic variation of the emotional state in facial image sequences. The proposed system consists of two main steps: emotional feature extraction from facial images and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on a Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a Harmony Search (HS) based heuristic optimization procedure for the parameter learning of the HMM in order to classify the emotional state more accurately. Using these methods, we construct an emotion recognition system based on the variations in dynamic facial image sequences and attempt to improve its recognition performance.
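
As a rough illustration of HS-based structure optimization, the sketch below searches only over the number of HMM states using hmmlearn; the paper optimizes HMM parameters more broadly, and all search constants here are illustrative:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def hmm_score(n_states, X):
    """Fitness: log-likelihood of a Gaussian HMM with the candidate
    structure on the training feature sequences X (n_samples, n_features)."""
    try:
        model = GaussianHMM(n_components=int(n_states), n_iter=20)
        model.fit(X)
        return model.score(X)
    except Exception:
        return -np.inf  # degenerate structures get the worst fitness

def harmony_search(X, hm_size=8, iters=50, hmcr=0.9, par=0.3):
    """Harmony Search over the number of HMM states (2..10)."""
    memory = rng.integers(2, 11, hm_size)            # harmony memory
    scores = np.array([hmm_score(s, X) for s in memory])
    for _ in range(iters):
        if rng.random() < hmcr:                      # memory consideration
            new = int(memory[rng.integers(hm_size)])
            if rng.random() < par:                   # pitch adjustment
                new = int(np.clip(new + rng.integers(-1, 2), 2, 10))
        else:                                        # random improvisation
            new = int(rng.integers(2, 11))
        s = hmm_score(new, X)
        worst = scores.argmin()
        if s > scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    return memory[scores.argmax()]
```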

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data and capturing the face from video is very important for online 3D face animation. Recently there has been much research on vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system proceeds in three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas according to the facial animation parameters (FAPs) defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace the expression data at about 8 fps.
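
The detection-then-tracking loop can be sketched with OpenCV as below; CamShift over a hue histogram stands in for the paper's color probability distribution model, and the cascade and histogram parameters are illustrative:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Steps 1-2: skin/face candidates verified by a Haar classifier
# (assumes at least one face is found in the first frame).
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.1, 5)
x, y, w, h = faces[0]

# Step 3: track via a color probability distribution (CamShift):
# back-project a hue histogram of the face region into each new frame.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_window = cv2.CamShift(back_proj, track_window, term)
```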

Adaptive Facial Expression Recognition System based on Gabor Wavelet Neural Network (가버 웨이블릿 신경망 기반 적응 표정인식 시스템)

  • Lee, Sang-Wan;Kim, Dae-Jin;Kim, Yong-Soo;Bien, Zeungnam
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.1-7
    • /
    • 2006
  • In this paper, an adaptive facial expression recognition system based on a Gabor wavelet neural network is proposed, which considers six feature points in the face image to extract the specific features of a facial expression. A Levenberg-Marquardt-based training methodology is used to formulate the initial network, including the feature extraction stage, so heuristics in designing the feature extraction process can be excluded. Moreover, to adapt the network to a new user, Q-learning with an enhanced reward function and an unsupervised fuzzy neural network model are used: Q-learning enables the system to obtain optimal sets of Gabor filters capable of producing separable features, and the fuzzy neural network enables it to adapt to changes in the user. The proposed system therefore has good on-line adaptation capability, meaning that it can continuously trace changes in the user's face.
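
For illustration, a fixed Gabor filter bank sampled at facial feature points might look like the following; note that the paper learns the filter parameters via Q-learning rather than fixing them, and every constant below is an assumption:

```python
import cv2
import numpy as np

def gabor_bank(n_orient=4, n_scale=2, ksize=21):
    """Build a small Gabor filter bank over orientations and scales."""
    kernels = []
    for s in range(n_scale):
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            lambd = 8.0 * (s + 1)  # wavelength grows with scale
            kernels.append(cv2.getGaborKernel(
                (ksize, ksize), sigma=4.0, theta=theta,
                lambd=lambd, gamma=0.5, psi=0))
    return kernels

def features_at_points(gray, points, kernels, patch=8):
    """Mean Gabor responses sampled around the six facial feature points."""
    feats = []
    for k in kernels:
        resp = cv2.filter2D(gray.astype(np.float32), -1, k)
        for (x, y) in points:
            feats.append(resp[y - patch:y + patch, x - patch:x + patch].mean())
    return np.array(feats)
```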