• Title/Summary/Keyword: Facial components

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.117-122
    • /
    • 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. FCPs play an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract FCPs with an expensive motion-capture device or by using markers, which are inconvenient and place a psychological burden on the subject. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve the conventional algorithms for detecting facial components, which are the basis of FCP extraction.
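
The abstract above does not detail the component-detection step, so the sketch below is only an illustrative assumption, not the authors' algorithm: dark facial components (eyes, brows, mouth) are isolated by thresholding and connected-component labeling, and crude candidate characteristic points are taken from their bounding boxes.

```python
# Illustrative sketch only: connected-component based candidate FCP extraction.
# This is NOT the paper's algorithm; it assumes a cropped grayscale face image.
import cv2
import numpy as np

def candidate_fcps(gray_face):
    # Dark facial components (eyes, brows, mouth) tend to survive an inverted threshold.
    _, binary = cv2.threshold(gray_face, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    points = []
    for i in range(1, num):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 30:                             # drop tiny noise blobs
            continue
        # Use bounding-box corners and the center as crude characteristic points.
        points += [(x, y), (x + w, y), (x + w // 2, y + h // 2)]
    return points

if __name__ == "__main__":
    face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
    if face is not None:
        print(candidate_fcps(face)[:10])
```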

Hybrid Neural Classifier Combined with H-ART2 and F-LVQ for Face Recognition

  • Kim, Do-Hyeon;Cha, Eui-Young;Kim, Kwang-Baek
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1287-1292
    • /
    • 2005
  • This paper presents an effective pattern classification model that designs artificial-neural-network-based pattern classifiers for face recognition. First, an RGB image captured by a frame grabber is converted into an HSV image, which is closer to the human visual system. The coarse facial region is then extracted using the hue (H) and saturation (S) components, excluding the intensity (V) component, which is sensitive to environmental illumination. Next, the fine facial region is extracted by matching against edge- and gray-based templates. To obtain an illumination-invariant, well-conditioned facial image, histogram equalization and intensity compensation based on an illumination plane are performed. The extracted and enhanced facial images are then used to train the pattern classification models. The proposed H-ART2 model, which has hierarchical ART2 layers, and the F-LVQ model, which is optimized by fuzzy membership, make it possible to classify facial patterns by optimizing the relations between clusters and searching the clustered reference patterns effectively. Experimental results show that the proposed face recognition system matches the SVM model, which is widely used in the face recognition field, in recognition rate and surpasses it in classification speed. Moreover, a high recognition rate was obtained by combining the proposed neural classification models.
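
A minimal sketch of the pre-processing stages described in the abstract above: RGB-to-HSV conversion, coarse skin masking from hue and saturation while ignoring the illumination-sensitive V channel, and histogram equalization. The hue/saturation thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of the pre-processing stages described above (thresholds are assumptions).
import cv2
import numpy as np

def coarse_face_mask(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Use only hue and saturation; V is sensitive to illumination (per the abstract).
    mask = ((h < 25) | (h > 160)) & (s > 40) & (s < 180)
    return mask.astype(np.uint8) * 255

def normalize_face(gray_face):
    # Histogram equalization to reduce lighting variation before classification.
    return cv2.equalizeHist(gray_face)

if __name__ == "__main__":
    img = cv2.imread("frame.png")            # hypothetical frame-grabber image
    if img is not None:
        cv2.imwrite("mask.png", coarse_face_mask(img))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cv2.imwrite("equalized.png", normalize_face(gray))
```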

The facial expression generation of vector graphic character using the simplified principal component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.9
    • /
    • pp.1547-1553
    • /
    • 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character using simplified principal component vectors. First, we perform principal component analysis on nine facial expressions (astonished, delighted, etc.) redefined based on Russell's model of internal emotional states. From this, we find the principal component vectors that have the greatest effect on the character's facial features and expressions, and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expressions. The method saves considerable memory and creates intermediate expressions with little computation, so the performance of a character generation system can be improved considerably for web, mobile, and game services that require real-time control.
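
As an illustration of the approach described above, the sketch below applies PCA to a small matrix of expression feature vectors and interpolates component weights to synthesize an intermediate expression. The data shapes, number of components, and random example data are assumptions, not the paper's values.

```python
# Sketch: PCA over expression feature vectors and interpolation of component
# weights to synthesize intermediate expressions (shapes and values are assumed).
import numpy as np

def pca_basis(X, k):
    """X: (n_expressions, n_features) matrix of control-point coordinates."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                    # top-k principal component vectors

def project(x, mean, components):
    return components @ (x - mean)         # expression -> low-dimensional weights

def reconstruct(w, mean, components):
    return mean + components.T @ w         # weights -> expression feature vector

def interpolate(w_a, w_b, t):
    # Blend two expressions by linearly interpolating their PCA weights.
    return (1.0 - t) * w_a + t * w_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(9, 60))           # e.g. 9 expressions x 30 (x, y) points
    mean, comps = pca_basis(X, k=3)
    w0, w1 = project(X[0], mean, comps), project(X[1], mean, comps)
    halfway = reconstruct(interpolate(w0, w1, 0.5), mean, comps)
    print(halfway.shape)
```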

Face Detection using Orientation (In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns (LBP) (방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출)

  • Lee, Hee-Jae;Kim, Ha-Young;Lee, David;Lee, Sang-Goog
    • Journal of KIISE
    • /
    • v.44 no.7
    • /
    • pp.692-702
    • /
    • 2017
  • Face detection using LBP-based feature descriptors has the limitation that it cannot represent spatial relationships between the facial shape and facial components such as the eyes, nose, and mouth. To address this, previous research divided the facial image into a number of square sub-regions. However, because the number and size of the sub-regions vary, the criteria for a division suited to the database used in an experiment are ambiguous, the dimension of the LBP histogram grows in proportion to the number of sub-regions, and sensitivity to in-plane rotation of the face increases significantly as the number of sub-regions grows. In this paper, we present a novel facial region segmentation method that addresses the in-plane rotation issue of LBP-based feature descriptors while limiting the dimensionality of the descriptor. The proposed method showed a detection accuracy of 99.0278% on single facial images rotated in the image plane.
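
For context, the conventional square-grid LBP descriptor that the abstract critiques (per-cell LBP histograms concatenated into one vector) can be sketched with scikit-image as below; the grid size and LBP parameters are illustrative assumptions, and this is the baseline, not the paper's rotation-tolerant segmentation.

```python
# Baseline square-grid LBP descriptor (the conventional approach the abstract
# critiques); grid size and LBP parameters are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def grid_lbp_descriptor(gray, grid=(7, 7), P=8, R=1):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + "other"
    gh, gw = grid
    h, w = lbp.shape
    hist = []
    for i in range(gh):
        for j in range(gw):
            cell = lbp[i * h // gh:(i + 1) * h // gh,
                       j * w // gw:(j + 1) * w // gw]
            counts, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            hist.append(counts / max(counts.sum(), 1))  # per-cell normalization
    # The dimension grows with the number of sub-regions, as the abstract notes.
    return np.concatenate(hist)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_face = (rng.random((112, 92)) * 255).astype(np.uint8)
    print(grid_lbp_descriptor(fake_face).shape)      # 7 * 7 * 10 = 490 bins
```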

A STUDY ON USEFULNESS OF THE REFERENCE LINE IN DIAGNOSIS OF THE FACIAL ASYMMETRY (안모비대칭의 진단용 기준선의 유용성에 관한 연구)

  • Ryu, Sung-Ho;Chang, Hyun-Ho
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.31 no.3
    • /
    • pp.266-273
    • /
    • 2005
  • Purpose: To assess the relationship between the soft tissue reference line and the hard tissue reference line using standardized photographs and posteroanterior cephalometric radiographs (P-A) in patients with facial asymmetry, and to compare differences in angular measurements between a normal group and an asymmetry group. Methods: The normal group consisted of 44 persons with normal occlusion and normal facial morphology. The asymmetry group consisted of 90 patients with facial asymmetry. Standardized facial photographs and P-A were taken of all subjects. The horizontal reference lines were the bipupillary line in the photographs and the latero-orbitale line in the P-A, respectively. The vertical reference line was drawn perpendicular to the horizontal reference line from its midpoint. Angular measurements of otobasion canting, lip canting, nose deviation, chin deviation, and maxillary deviation were compared and analyzed in the photographs, and angular measurements of mastoid canting, mandibular canting, nose deviation, chin deviation, and maxillary deviation were compared and analyzed in the P-A. Results: 1. The variables of the photographs and the P-A were significantly related in the asymmetry group. 2. Significant differences were found between all variable pairs except PT2 and PA2 in the asymmetry group, and between PT1 and PA1 and between PT3 and PA3 in the normal group. 3. Comparison of the angular differences between the control group and the experimental group for each variable showed significant differences except for PA1. Conclusions: Soft tissue components may not compensate for the underlying skeletal imbalance in nose deviation and chin deviation. The horizontal reference lines in the photographs were significantly related to those in the P-A, but the angular variables from the two examinations showed significant differences. Therefore, we do not recommend using photography as a complement to the P-A in the assessment of facial asymmetry.
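
The canting and deviation measures above are angles between a line through paired landmarks and the horizontal or vertical reference line. The small sketch below shows that arithmetic under assumed landmark coordinates; it is not taken from the paper's measurement protocol.

```python
# Sketch of the angular measurements described above: canting is the tilt of a
# landmark pair relative to the horizontal reference line; deviation is measured
# against the vertical line through its midpoint (landmark values are made up).
import math

def canting_angle(left, right):
    """Angle (degrees) of the line left->right relative to the horizontal."""
    dx, dy = right[0] - left[0], right[1] - left[1]
    return math.degrees(math.atan2(dy, dx))

def deviation_angle(midpoint, landmark):
    """Angle (degrees) of a landmark from the vertical line through midpoint."""
    dx, dy = landmark[0] - midpoint[0], landmark[1] - midpoint[1]
    return math.degrees(math.atan2(dx, dy))

if __name__ == "__main__":
    left_pupil, right_pupil = (100.0, 200.0), (220.0, 206.0)   # assumed pixels
    mid = ((left_pupil[0] + right_pupil[0]) / 2, (left_pupil[1] + right_pupil[1]) / 2)
    print("bipupillary canting:", canting_angle(left_pupil, right_pupil))
    print("chin deviation:", deviation_angle(mid, (166.0, 420.0)))
```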

Study on Facial Expression Factors as Emotional Interaction Design Factors (감성적 인터랙션 디자인 요소로서의 표정 요소에 관한 연구)

  • Heo, Seong-Cheol
    • Science of Emotion and Sensibility
    • /
    • v.17 no.4
    • /
    • pp.61-70
    • /
    • 2014
  • Verbal communication has limits in the interaction between robot and human, so nonverbal communication is required to realize smoother and more efficient communication and even the emotional expression of the robot. This study derived seven items of nonverbal information based on shopping behavior with a robot designed to support shopping, selected facial expression as the channel for the derived nonverbal information, and coded the face components through 2D analysis. The study then analyzed the significance of the expressed nonverbal information using 3D animation that combines the codes of the face components. The analysis showed that the proposed expression method for nonverbal information was recognized at a high level of significance, suggesting the potential of this study as baseline data for research on nonverbal information. However, the case of 'embarrassment' showed limits in translating the coded face components into shape and requires more systematic study.

Face Deformation Technique for Efficient Virtual Aesthetic Surgery Models (효과적인 얼굴 가상성형 모델을 위한 얼굴 변형 기법)

  • Park Hyun;Moon Young Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.63-72
    • /
    • 2005
  • In this paper, we propose a deformation technique based on Radial Basis Functions (RBF) and a blending technique that combines the deformed facial component with the original face, for a Virtual Aesthetic Surgery (VAS) system. The deformation technique requires smoothness and accuracy to deform soft facial components, and it also requires locality so that facial components outside the deformation region are neither affected nor distorted. To satisfy these requirements, the VAS system computes the degree of deformation of lattice cells using an RBF on top of a Free-Form Deformation (FFD) model. The deformation error is compensated through the coefficients of the mapping function, which are solved recursively with the Singular Value Decomposition (SVD) technique using the sum of squared errors (SSE) between the deformed control points and the target control points on the base curves. The deformed facial component is blended with the original face using a blending ratio computed from the Euclidean distance transform. Experimental results show that the proposed deformation and blending techniques are very efficient in terms of accuracy and distortion.
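
A hedged sketch of the core numerical step described above: fitting RBF displacement coefficients that carry source control points to target control points with an SVD-based least-squares solve, then displacing grid points. The Gaussian kernel and its width are assumptions; the paper's basis function and FFD lattice details are not specified in the abstract.

```python
# Sketch of an RBF warp fitted by least squares (SVD-based), in the spirit of the
# deformation step described above; the Gaussian kernel and sigma are assumptions.
import numpy as np

def rbf_kernel(a, b, sigma=40.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_warp(src_pts, dst_pts, sigma=40.0):
    """Solve K @ W = (dst - src) for per-control-point displacement weights W."""
    K = rbf_kernel(src_pts, src_pts, sigma)
    # np.linalg.lstsq uses an SVD-based solver, matching the SSE-minimization idea.
    W, *_ = np.linalg.lstsq(K, dst_pts - src_pts, rcond=None)
    return W

def apply_rbf_warp(points, src_pts, W, sigma=40.0):
    return points + rbf_kernel(points, src_pts, sigma) @ W

if __name__ == "__main__":
    src = np.array([[50., 50.], [150., 50.], [100., 120.]])
    dst = src + np.array([[0., 5.], [0., 5.], [0., -10.]])   # assumed target offsets
    W = fit_rbf_warp(src, dst)
    grid = np.stack(np.meshgrid(np.arange(0, 200, 50.),
                                np.arange(0, 200, 50.)), -1).reshape(-1, 2)
    print(apply_rbf_warp(grid, src, W)[:4])
```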

Skew correction of face image using eye components extraction (눈 영역 추출에 의한 얼굴 기울기 교정)

  • Yoon, Ho-Sub;Wang, Min;Min, Byung-Woo
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.12
    • /
    • pp.71-83
    • /
    • 1996
  • This paper describes a facial component detection and skew correction algorithm for face recognition. We use a priori knowledge and models of isolated regions to detect eye locations in face images captured in a natural office environment. The relations between human face components are represented by several rules. We adopt an edge detection algorithm using the Sobel mask and an 8-connected labeling algorithm using array pointers. A labeled image contains many isolated components, so the eye-size rules are applied first; these rules are not affected much by irregular input conditions, since they constrain only the size of a component and the ratio between its horizontal and vertical extents. The eye-size rules yield 2 to 16 candidate eye components. Next, candidate eye pairs are verified using location and shape information, and one eye-pair location is selected using face models of the eyes and eyebrows. Once the eye regions are extracted, we connect the center points of the two eyes and calculate the angle of this line relative to the horizontal. We then rotate the face to compensate for this angle so that the two eyes lie on a horizontal line. We tested 120 input images from 40 people and achieved a 91.7% success rate using the eye-size rules and face model. The main cause of the 8.3% failure rate is components adjacent to the eyes, such as eyebrows. To detect facial components in the failed images, we are developing a mouth-region processing module.
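
The skew-correction step itself is straightforward to sketch: rotate the image so that the two eye centers lie on a horizontal line. The eye coordinates below are assumed to come from the detection stage described above.

```python
# Sketch of the skew-correction step: rotate so the two eye centers lie on a
# horizontal line (the eye centers are assumed to be already detected).
import math
import cv2

def correct_skew(image, left_eye, right_eye):
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))   # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    # Rotating by this angle about the eye midpoint levels the eye line.
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h))

if __name__ == "__main__":
    img = cv2.imread("face.png")                         # hypothetical input file
    if img is not None:
        cv2.imwrite("upright.png", correct_skew(img, (120, 150), (200, 162)))
```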

A Pilot Study on Evoked Potentials by Visual Stimulation of Facial Emotion in Different Sasang Constitution Types (얼굴 표정 시각자극에 따른 사상 체질별 유발뇌파 예비연구)

  • Hwang, Dong-Uk;Kim, Keun-Ho;Lee, Yu-Jung;Lee, Jae-Chul;Kim, Myoyung-Geun;Kim, Jong-Yeol
    • Journal of Sasang Constitutional Medicine
    • /
    • v.22 no.1
    • /
    • pp.41-48
    • /
    • 2010
  • 1. Objective: There have been a few attempts to diagnose Sasang Constitution using EEG, but the topic has not been studied intensively. For practical diagnosis, the EEG characteristics of each constitution should be studied first. It has recently been shown that Sasang Constitution may be related to harm avoidance and novelty seeking in temperament and character profiles. Based on this finding, we propose a visual stimulation method to evoke an EEG response that may discriminate between constitutional groups. Through an experiment with this method, we tried to reveal the EEG characteristics of each constitutional group using event-related potentials. 2. Methods: We used facial visual stimulation to examine the EEG characteristics of each constitutional group. To characterize the sensitivity and latency of the response, we added several levels of noise to the facial images. Six male subjects (2 Taeeumin, 2 Soyangin, 2 Soeumin), all healthy and in their twenties, participated in this study. To remove artifacts and slow modulation, we discarded EOG-contaminated data and applied renormalization. To extract stimulation-related components, a normalized event-related potential method was used. 3. Results: It was verified from the Oz channel that facial image processing components were extracted. For lower noise levels, components related to the visual stimulation were clearly visible in the Oz, Pz, and Cz channels. The Pz and Cz channels showed differences among the three constitutional groups, with a maximum around 200 ms. A moderate noise level appears most appropriate for diagnosis. 4. Conclusion: We verified that visual stimulation with facial emotion may be a good candidate for evoking differences in the EEG response between constitutional groups. The differences observed in the experiment may imply that the processing of emotion has distinct latencies and sensitivities for each constitutional group, and this distinction might be related to the temperament profiles of the constitutional groups.
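
As a rough illustration of the event-related potential computation mentioned above, the sketch below epochs a single-channel signal around stimulus onsets, applies baseline correction, and averages. The sampling rate, epoch window, and synthetic data are assumptions, not the study's parameters.

```python
# Sketch of stimulus-locked ERP averaging as described above (sampling rate,
# epoch window, and baseline interval are assumptions, not the study's values).
import numpy as np

def erp_average(eeg, onsets, fs=500, pre=0.2, post=0.6):
    """eeg: (n_samples,) single-channel signal; onsets: stimulus sample indices."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        if t - n_pre < 0 or t + n_post > len(eeg):
            continue                                  # skip incomplete epochs
        epoch = eeg[t - n_pre:t + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()                 # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)                    # event-related potential

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = rng.normal(size=60 * 500)                # 60 s of synthetic 500 Hz EEG
    print(erp_average(signal, onsets=range(1000, 29000, 1000)).shape)
```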

Global Feature Extraction and Recognition from Matrices of Gabor Feature Faces

  • Odoyo, Wilfred O.;Cho, Beom-Joon
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.2
    • /
    • pp.207-211
    • /
    • 2011
  • This paper presents a method for facial feature representation and recognition based on covariance matrices of Gabor-filtered images. Gabor filters are a very powerful tool for processing images because they respond to different local orientations and wave numbers around points of interest, especially the local features of the face. This is a very useful attribute for extracting features around facial components such as the eyebrows, eyes, mouth, and nose. The covariance matrices computed on Gabor-filtered faces are adopted as the feature representation for face recognition. A geodesic distance measure is used for matching and is preferred for its global consistency over other measures; it takes into account the position of the data points in addition to the geometric structure of the given face images. The proposed method is invariant and robust under rotation, pose change, and boundary distortion. Tests run on random images as well as on the publicly available JAFFE and FRAV3D face recognition databases yield an impressively high recognition rate.
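
A sketch of the descriptor and matching idea described above: the covariance of responses from a small Gabor filter bank, compared with the affine-invariant geodesic metric for symmetric positive-definite matrices. The filter-bank parameters and the specific geodesic metric are assumptions; the paper's exact formulation may differ. Requires OpenCV, NumPy, and SciPy.

```python
# Sketch: covariance descriptor over a Gabor filter bank, compared with the
# affine-invariant geodesic metric for SPD matrices (bank parameters assumed).
import cv2
import numpy as np
from scipy.linalg import eigh

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=6):
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False)]

def covariance_descriptor(gray, bank):
    gray = gray.astype(np.float32)
    responses = [cv2.filter2D(gray, cv2.CV_32F, k).ravel() for k in bank]
    feats = np.stack(responses)                        # (n_filters, n_pixels)
    return np.cov(feats) + 1e-6 * np.eye(len(bank))    # regularized SPD matrix

def geodesic_distance(c1, c2):
    # Affine-invariant metric: sqrt(sum_i log^2 lambda_i), with lambda_i the
    # generalized eigenvalues of (c1, c2).
    lam = eigh(c1, c2, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

if __name__ == "__main__":
    a = cv2.imread("face_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
    b = cv2.imread("face_b.png", cv2.IMREAD_GRAYSCALE)
    if a is not None and b is not None:
        bank = gabor_bank()
        print(geodesic_distance(covariance_descriptor(a, bank),
                                covariance_descriptor(b, bank)))
```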