• Title/Summary/Keyword: facial classification


Orbicularis oris muscle reconstruction and cheiloplasty with Z-plasty in a patient with a transverse facial cleft

  • Koh, Sung-Hyuk;Jeong, Yeon-Woo;Han, Jeong Joon;Jung, Seunggon;Kook, Min-Suk;Oh, Hee-Kyun;Park, Hong-Ju
    • Maxillofacial Plastic and Reconstructive Surgery / v.41 / pp.55.1-55.7 / 2019
  • Background: A transverse facial cleft is Tessier's number 7 cleft among the clefts numbered 0-14 in Tessier's classification of craniofacial malformations; it varies from a simple widening of the oral commissure to a complete fissure extending toward the external ear. Case presentation: In a patient with a transverse facial cleft, we performed orbicularis oris muscle reconstruction and cheiloplasty with Z-plasty to arrange the orbicularis oris muscle functionally and to form a natural oral commissure. Conclusion: Orbicularis oris muscle reconstruction and cheiloplasty with Z-plasty achieved good functional and esthetic results. The surgical modality of our anatomical repair and the 3-month follow-up results are presented.

TREATMENT OF FACIAL ASYMMETRY : REPORT OF 2 CASES (비대칭 안모의 치험 2례)

  • Lee, Chul-Woo;Yeo, Hwan-Ho;Kim, Young-Gyun;Sul, In-Taek;Hyun, Yong-Hyu
    • Maxillofacial Plastic and Reconstructive Surgery / v.14 no.4 / pp.305-313 / 1992
  • Facial asymmetry can be most distressing for a young child and the parents, and can cause functional problems as a result of malocclusion. Classification of facial asymmetry has not yet been well organized because of the variety of its etiologic factors, involved sites, and clinical expressions. Although its causes are not known definitively, it is generally believed that aberrant patterns of condylar growth are related to facial asymmetry. This is a case report on the surgical correction of two patients with severe facial asymmetry. One patient was diagnosed with condylar hyperplasia and the other with condylar hypoplasia related to trauma. In the former case we performed simultaneous two-jaw surgery, condylar shaving, and inferior border ostectomy of the affected mandible; in the latter case, simultaneous two-jaw surgery, reverse-L osteotomy, and alloplastic implantation with Biocoral™. The postoperative results of both cases were excellent functionally and esthetically.


Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.10 no.3 / pp.443-458 / 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-hidden-layer feed-forward neural network. In this paper we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, in cognitive science, and in social interaction. This paper presents an FER method based on histogram of oriented gradients (HOG) features and an ELM ensemble. First, HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, each of which trains a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the results are combined by majority voting. The ELM ensemble with bagging significantly improves the generalization capability of the network. Two available facial expression datasets (JAFFE and CK+) were used to evaluate the performance of the proposed classification system. Even though the performance of an individual ELM was lower, the ELM ensemble using the bagging algorithm improved recognition performance significantly.
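The bagged-ELM idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the ELM is a toy single-hidden-layer network with random input weights and least-squares output weights, and the inputs stand in for HOG feature vectors (the JAFFE/CK+ data and HOG extraction are not reproduced).

```python
import numpy as np

class TinyELM:
    """Single-hidden-layer feed-forward net: random hidden weights, solved output weights."""
    def __init__(self, n_hidden=50, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # hidden-layer activations
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

def bagging_elm_predict(X_train, y_train, X_test, n_estimators=15, seed=0):
    """Train ELMs on bootstrap bags of the training data; combine by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for i in range(n_estimators):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap sample
        elm = TinyELM(n_hidden=50, rng=np.random.default_rng(seed + i))
        elm.fit(X_train[idx], y_train[idx])
        votes.append(elm.predict(X_test))
    votes = np.stack(votes)                     # shape: (n_estimators, n_test)
    # majority voting across the ensemble members
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Because each ELM draws different random hidden weights and a different bootstrap bag, the members make partly independent errors, which is what the majority vote exploits.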

Facial Gender Recognition via Low-rank and Collaborative Representation in An Unconstrained Environment

  • Sun, Ning;Guo, Hang;Liu, Jixin;Han, Guang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4510-4526 / 2017
  • Most available methods of facial gender recognition work well under constrained conditions, but their performance decreases significantly in unconstrained environments. In this paper, a method based on low-rank and collaborative representation is proposed for facial gender recognition in the wild. First, low-rank decomposition is applied to the face image to minimize the negative effects of the various corruptions and dynamic illumination found in an unconstrained environment. We then employ collaborative representation as the classifier; it uses the much weaker l2-norm sparsity constraint to achieve similar classification results at significantly lower complexity. The proposed method combines low-rank decomposition and collaborative representation into an organic whole to solve facial gender recognition under unconstrained environments. Extensive experiments on three benchmarks, AR, CAS-PEAL, and YouTube, show the effectiveness of the proposed method. Compared with several state-of-the-art algorithms, our method is markedly superior in accuracy and robustness.
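The collaborative-representation classifier mentioned above has a simple closed form, since the l2-regularized coding problem is just ridge regression. The sketch below is a generic CRC illustration under that reading, not the paper's code, and it omits the low-rank decomposition step: a test sample is coded jointly over all training samples, then assigned to the class whose portion of the code reconstructs it best.

```python
import numpy as np

def crc_predict(D, labels, y, lam=0.01):
    """Collaborative representation classification.

    D      -- (d, n) dictionary whose columns are training samples
    labels -- (n,) class label of each column
    y      -- (d,) test sample
    """
    n = D.shape[1]
    # closed-form l2-regularized coding: x = (D^T D + lam*I)^-1 D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # reconstruction using only this class's columns and coefficients
        residual = np.linalg.norm(y - D[:, mask] @ x[mask])
        if residual < best_res:
            best_class, best_res = c, residual
    return best_class
```

The key contrast with sparse-representation classification is that the code `x` here is dense: the l2 penalty spreads weight across classes, yet the class-wise residual still discriminates, at the cost of one linear solve instead of an iterative l1 solver.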

Analysis of Facial Coloration in Accordance with the Type of Personal Color System of Female University Students (여대생의 퍼스널 컬러 시스템 유형에 따른 얼굴색 분석)

  • Lee, Eun-Young;Park, Kil-Soon
    • The Research Journal of the Costume Culture / v.20 no.2 / pp.144-153 / 2012
  • This study performed a simultaneous sensory evaluation and color measurement of 136 female university students living in the Daejeon region. Participants' facial coloration was measured under available light between 11 AM and 3 PM, from spring (May) to autumn (October) of 2009. For statistical analysis, descriptive statistics, multivariate analysis, and discriminant analysis were performed using SPSS version 18.0. The results of this study are as follows. First, in the sensory evaluation, the blue-undertone face type was dominant among the female university student participants. Second, the color measurements of the cheek and forehead showed that the forehead had a yellowish coloration and was relatively dark compared to the cheeks, whereas the cheeks displayed a reddish coloration and were relatively bright compared to the forehead. Third, in the investigation of facial coloration variables, yellowish and reddish chromaticity on the cheek emerged as the variables influencing the classification of facial color types. Discriminant analysis using these two color variables showed that the yellowish chromaticity on the cheek had a greater influence than the reddish chromaticity.

Geometrical Feature-Based Detection of Pure Facial Regions (기하학적 특징에 기반한 순수 얼굴영역 검출기법)

  • 이대호;박영태
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.773-779 / 2003
  • Locating the exact positions of facial components is a key preprocessing step for realizing highly accurate and reliable face recognition schemes. In this paper, we propose a simple but powerful method for detecting isolated facial components such as the eyebrows, eyes, and mouth, which are horizontally oriented and have relatively dark gray levels. The method is based on shape-resolving, locally optimum thresholding, which can guarantee isolated detection of each component. We show that pure facial regions can be determined by grouping facial features that satisfy simple geometric constraints on the unique facial structure. In a test on over 1,000 images in the AR face database, pure facial regions were detected correctly for every face image without glasses. Very few errors occurred for face images with thick-framed glasses, because of the occluded eyebrow pairs. The proposed scheme may be best suited to a later classification stage using either mappings or template matching, because of its ability to handle rotational and translational variations.
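The grouping step described above can be illustrated with a toy predicate. This is a hedged sketch: the component detection (thresholding) is not reproduced, and the tolerances below are illustrative assumptions, not the paper's values. Given candidate component centers, an (eye, eye, mouth) triple is accepted only if it satisfies simple constraints of the kind the abstract describes: eyes roughly level, mouth below and between them.

```python
def plausible_face(left_eye, right_eye, mouth,
                   level_tol=0.25, mouth_ratio=(0.6, 1.8)):
    """Accept a candidate (eye, eye, mouth) triple of (x, y) centers.

    Image coordinates: x grows rightward, y grows downward, so
    'mouth below the eyes' means a larger y value.
    Tolerances are illustrative, not taken from the paper.
    """
    (lx, ly), (rx, ry), (mx, my) = left_eye, right_eye, mouth
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    if eye_dist == 0:
        return False
    # eyes approximately horizontal
    if abs(ry - ly) > level_tol * eye_dist:
        return False
    # mouth horizontally between the two eyes
    if not min(lx, rx) <= mx <= max(lx, rx):
        return False
    # mouth below the eye line, at a plausible distance relative to eye separation
    drop = my - (ly + ry) / 2
    return mouth_ratio[0] * eye_dist <= drop <= mouth_ratio[1] * eye_dist
```

Filtering candidate triples with such constraints is what lets isolated dark components (eyebrows, eyes, mouth) be assembled into a pure facial region while rejecting spurious detections.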

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements under opposite emotion stimuli are analyzed by image processing of Kinect facial images. To induce the two opposite emotion pairs "Sad - Excitement" and "Contentment - Angry", which lie at opposite positions on Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen from the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed, using a local minimum shift matching method to handle the problem of non-linear facial movement. As a result, right-side and left-side facial movements occurred for the "Sad" and "Excitement" emotions, respectively. Left-side facial movement occurred comparatively more for the "Contentment" emotion. In contrast, both left- and right-side movements occurred for the "Angry" emotion.

A Software Error Examination of 3D Automatic Face Recognition Apparatus(3D-AFRA) : Measurement of Facial Figure Data (3차원 안면자동인식기(3D-AFRA)의 Software 정밀도 검사 : 형상측정프로그램 오차분석)

  • Seok, Jae-Hwa;Song, Jung-Hoon;Kim, Hyun-Jin;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Jun-Hee;Kho, Byung-Hee;Kim, Jong-Won;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.19 no.3 / pp.51-61 / 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions. We are developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics. This apparatus produces a 3D image of a person's face and measures facial figure data, so we examined the measurement error of its facial figure data as part of a software error analysis. 2. Methods: We scanned a face statue using 3D-AFRA and measured lengths between facial definition parameters of the restored facial figure data with the Facial Measurement program. 2.1 Repeatability test: We measured the lengths between facial definition parameters of the data restored by 3D-AFRA with the Facial Measurement program 10 times, then compared the 10 results with each other. 2.2 Measurement error test: We measured the lengths between facial definition parameters with two different programs, the Facial Measurement program and Rapidform2006, using two measurement methods: straight-line measurement and curved-line measurement. We then compared the results from the Facial Measurement program with those from Rapidform2006. 3. Results and Conclusions: In the repeatability test, the standard deviation of the results was 0.084-0.450 mm. In the straight-line measurement error test, the average error was 0.0582 mm and the maximum error was 0.28 mm; in the curved-line test, the average error was 0.413 mm and the maximum error was 1.53 mm. In conclusion, we assessed the accuracy and repeatability of the Facial Measurement program as considerably good. We will continue to improve the accuracy of 3D-AFRA in both hardware and software.


Sasang Constitution Classification System by Morphological Feature Extraction of Facial Images

  • Lee, Hye-Lim;Cho, Jin-Soo
    • Journal of the Korea Society of Computer and Information / v.20 no.8 / pp.15-21 / 2015
  • This study proposes a Sasang constitution classification system that can increase the objectivity and reliability of Sasang constitution diagnosis using frontal face images, in order to overcome the subjectivity of classification based on Sasang constitution specialists' experience. For classification, features describing the shapes of the eyes, nose, mouth, and chin were defined and extracted through morphological statistical analysis of face images. Sasang constitution was then classified by an SVM (Support Vector Machine) classifier using the extracted features as input; in experiments, the proposed system achieved a correct recognition rate of 93.33%. Unlike existing systems in which characteristic points are designated manually, this system achieved a high recognition rate and is therefore expected to be useful as a more objective Sasang constitution classification system.
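The classification stage described above (shape features fed to an SVM) can be sketched with scikit-learn. This is an illustration only: the four class labels stand in for the Sasang constitution types, and the clustered random features are synthetic stand-ins for the paper's morphological measurements of the eyes, nose, mouth, and chin.

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is installed

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 30, 6, 4

# synthetic stand-in data: one cluster of "shape features" per constitution type
X = np.vstack([rng.standard_normal((n_per_class, n_features)) * 0.3 + c
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# an RBF-kernel SVM as the multi-class classifier (one-vs-one internally)
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
accuracy = clf.score(X, y)
```

With real inputs, the feature vector for each face would hold the extracted eye/nose/mouth/chin shape measurements, and accuracy would of course be evaluated on held-out faces rather than the training set.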

Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems / v.12 no.4 / pp.591-611 / 2016
  • High-dimensional spaces are the biggest problem in classification, because computation takes longer and is therefore more expensive. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials are proposed for extracting facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces provide an alternative appearance-based feature extraction method for handling non-linear features. The kernel trick is used to carry out the matrix computation for the homogeneous and non-homogeneous polynomials. The weights and projections of the proposed method's new feature space were evaluated on three face image databases: YALE, ORL, and UoB. The experiments produced highest recognition rates of 94.44%, 97.5%, and 94% for YALE, ORL, and UoB, respectively. These results show that the proposed method produces higher recognition rates than other methods such as Eigenfaces, Fisherfaces, Laplacianfaces, and O-Laplacianfaces.
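The kernel trick mentioned above can be sketched as polynomial-kernel eigendecomposition: instead of forming the high-dimensional polynomial feature space explicitly, one eigendecomposes the centered kernel matrix. This is a generic kernel-PCA-style illustration of that idea, not the paper's exact formulation, and the tiny random "images" are stand-ins for the YALE/ORL/UoB data.

```python
import numpy as np

def polynomial_kernel(X, degree=2, c=0.0):
    """c = 0 gives the homogeneous kernel (x.y)^d; c > 0 the non-homogeneous (x.y + c)^d."""
    return (X @ X.T + c) ** degree

def kernel_eigenspace(K, n_components=2):
    """Double-center the kernel matrix and return its leading eigenpairs (the eigenspace)."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centering in the implicit feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))                 # 8 tiny flattened "face images"

K_hom = polynomial_kernel(X, degree=2, c=0.0)    # homogeneous polynomial kernel
K_non = polynomial_kernel(X, degree=2, c=1.0)    # non-homogeneous polynomial kernel
vals, vecs = kernel_eigenspace(K_non, n_components=3)
# projected features of the 8 training samples onto the polynomial eigenspace
features = vecs * np.sqrt(np.clip(vals, 0.0, None))
```

The benefit is exactly the one the abstract claims: the degree-2 feature map of a 16-dimensional input lives in a much larger space, yet only an 8x8 kernel matrix is ever computed.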