• Title/Summary/Keyword: Facial components


ON CEPHALOMETRIC STUDY OF AXIAL INCLINATIONS IN RELATIONS TO THE MALOCCLUSION TYPES (부정교합유형(不正咬合類型)에 따른 치축경사도(齒軸傾斜度)에 관(關)한 두부방사선계측학적(頭部放射線計測學的) 연구(硏究))

  • Hong, Seong-Deok;Cha, Kyung-Suk
    • The korean journal of orthodontics
    • /
    • v.21 no.3
    • /
    • pp.673-683
    • /
    • 1991
  • This research was performed to identify the adaptation patterns of maxillary and mandibular posterior teeth to changes in the relationships of the vertical skeletal components that constitute the skeletofacial complex. Sixty-one adult malocclusion patients were chosen as subjects according to Hellman's dental age, with FMN-A-B angles within the normal range. The subjects were divided into 4 groups for the maxilla and 3 groups for the mandible according to the mesiodistal inclinations of the teeth. The following results were obtained after studying the relationships of the vertical skeletal components between the groups. 1. In spite of the fact that the FMN-A-B angle was within the normal range, the mesiodistal inclinations of the maxillary and mandibular posterior teeth differed according to the anteroposterior relationship of the maxilla and mandible. Where the FMN-A-B angle was large, the mesial inclinations of the maxillary posterior teeth increased from posterior to anterior, whereas in the mandible they decreased overall. 2. The mesial inclinations of the mandibular posterior teeth increased as the lower facial height, occlusal plane angle, and mandibular plane angle became greater. 3. The pattern of mesial inclination of the maxillary posterior teeth varied with the lower facial height: when it was relatively large, the inclinations increased from posterior to anterior, and when it was small, they decreased almost uniformly. 4. The mesial inclinations of the maxillary posterior teeth decreased as the lower facial height, palatal plane angle, occlusal plane angle, and mandibular plane angle became greater.


Analysis of facial expression recognition (표정 분류 연구)

  • Son, Nayeong;Cho, Hyunsun;Lee, Sohyun;Song, Jongwoo
    • The Korean Journal of Applied Statistics
    • /
    • v.31 no.5
    • /
    • pp.539-554
    • /
    • 2018
  • Effective interaction between user and device is considered an important capability of IoT devices. Some applications must recognize human facial expressions in real time and make accurate judgments in order to respond to situations correctly. Much research on facial image analysis has therefore been conducted to build more accurate and faster recognition systems. In this study, we constructed an automatic facial expression recognition system in two steps: a facial recognition step and a classification step. We compared various models on feature sets built from pixel information, landmark coordinates, Euclidean distances among landmark points, and arctangent angles. We found a fast and efficient prediction model using only 30 principal components of the face landmark information. We applied several prediction models, including linear discriminant analysis (LDA), random forests, support vector machines (SVM), and bagging; the SVM model gave the best result. The LDA model gave the second-best prediction accuracy, but it can fit and predict faster than SVM and the other methods. Finally, we compared our method to the Microsoft Azure Emotion API and a convolutional neural network (CNN). Our method gives a very competitive result.
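
A pipeline of this shape (landmark features reduced to 30 principal components, then classified with an SVM or LDA) can be sketched as below. The random data, 68-point landmark layout, seven-class label set, and default model settings are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a two-step expression classifier: PCA(30) on landmark-derived
# features, then SVM (reported best) vs. LDA (reported fastest).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 68 * 2))   # stand-in for 68 (x, y) landmarks per face
y = rng.integers(0, 7, size=200)     # stand-in for 7 expression classes

svm = make_pipeline(StandardScaler(), PCA(n_components=30), SVC())
lda = make_pipeline(StandardScaler(), PCA(n_components=30),
                    LinearDiscriminantAnalysis())
svm.fit(X, y)
lda.fit(X, y)
print(svm.predict(X[:5]))
```

In practice the landmark coordinates, pairwise distances, and angles described in the abstract would replace the random matrix `X`.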

Influence of heritability on craniofacial soft tissue characteristics of monozygotic twins, dizygotic twins, and their siblings using Falconer's method and principal components analysis

  • Song, Jeongmin;Chae, Hwa Sung;Shin, Jeong Won;Sung, Joohon;Song, Yun-Mi;Baek, Seung-Hak;Kim, Young Ho
    • The korean journal of orthodontics
    • /
    • v.49 no.1
    • /
    • pp.3-11
    • /
    • 2019
  • Objective: The purpose of this study was to investigate the influence of heritability on the craniofacial soft tissue cephalometric characteristics of monozygotic (MZ) twins, dizygotic (DZ) twins, and their siblings (SIB). Methods: The sample comprised Korean adult twins and their siblings (mean age, 39.8 years; MZ group, n = 36 pairs; DZ group, n = 13 same-sex pairs; SIB group, n = 26 same-sex pairs). Thirty cephalometric variables were measured to characterize facial profile, facial height, soft-tissue thickness, and projection of the nose and lip. Falconer's method was used to calculate heritability (low heritability, $h^2$ < 0.2; high heritability, $h^2$ > 0.9). After principal components analysis (PCA) was performed to extract the models, we calculated the intraclass correlation coefficient (ICC) and heritability of each component. Results: The MZ group exhibited higher ICC values for all cephalometric variables than the DZ and SIB groups. Among the cephalometric variables, the highest ${h^2}_{(MZ-DZ)}$ and ${h^2}_{(MZ-SIB)}$ values were observed for the nasolabial angle (NLA; 1.544 and 2.036), chin angle (1.342 and 1.112), soft tissue chin thickness (2.872 and 1.226), and upper lip thickness ratio (1.592 and 1.026). PCA derived eight components with a cumulative explained variance of 84.5%. The components exhibiting higher ${h^2}_{(MZ-DZ)}$ and ${h^2}_{(MZ-SIB)}$ values were PCA2, which includes facial convexity, NLA, and nose projection (1.026 and 0.972), and PCA7, which includes chin angle and soft tissue chin thickness (2.107 and 1.169). Conclusions: The nose and soft tissue chin were more influenced by genetic factors than the other soft tissues.
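
Falconer's method referenced in this abstract estimates broad heritability as twice the difference between MZ and DZ (or sibling) pair correlations, which is why estimates above 1 can occur. A minimal sketch, with illustrative correlation values rather than the study's data:

```python
# Falconer's heritability estimate from twin-pair correlations:
# h^2 = 2 * (r_MZ - r_other), applied here to intraclass correlations.
def falconer_h2(r_mz: float, r_other: float) -> float:
    """Heritability from MZ vs. DZ (or same-sex sibling) correlations."""
    return 2.0 * (r_mz - r_other)

r_mz, r_dz = 0.85, 0.45        # hypothetical ICC values, not from the study
print(falconer_h2(r_mz, r_dz))  # 0.8
```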

A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.37-43
    • /
    • 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, we adopt the AdaBoost algorithm to extract the facial region and convert it into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use the a and b components as features for color clustering. A nearest-neighbour clustering algorithm is applied to separate the skin region from the facial region, and K-means color clustering is applied to extract the lip-candidate region. Geometric characteristics are then used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
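
The K-means step on (a, b) values can be sketched as follows. The pixel values, cluster count, and the "highest mean a is the lip cluster" heuristic are illustrative assumptions; face detection and the RGB-to-Lab conversion are omitted.

```python
# Sketch: cluster face-region pixels by their Lab a and b channels and
# take the cluster with the largest a (reddest) as the lip candidate.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
skin = rng.normal([15.0, 18.0], 2.0, size=(500, 2))  # synthetic skin (a, b)
lips = rng.normal([35.0, 20.0], 2.0, size=(100, 2))  # synthetic lips: higher a
ab = np.vstack([skin, lips])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ab)
lip_cluster = int(np.argmax(km.cluster_centers_[:, 0]))  # reddest center
mask = km.labels_ == lip_cluster
print(mask.sum())  # roughly the 100 synthetic lip pixels
```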

A Facial Region Detection using the Skin Color and Edge Information at YCbCr (YCbCr 색공간에서 피부색과 윤곽선 정보를 이용한 얼굴 영역 검출)

  • 권혁봉;권동진;장언동;윤영복;안재형
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.1
    • /
    • pp.27-34
    • /
    • 2004
  • This thesis proposes a face detection algorithm using color and edge information in color images. The proposed algorithm segments skin color by the Cb and Cr components in YCbCr coordinates. Face candidate regions are then formed by morphological filtering and labeling. For each region, a vertical Sobel operation and a horizontal projection are performed on the Y luminance component; the peak value indicates the eye location. Similarly, the chin location is detected by a horizontal Sobel operation and a horizontal projection. Computer simulations show that the proposed method achieves detection rates similar to previous methods and, by detecting the chin, prevents the facial region from including the neck.
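
The two cues can be sketched as below: a Cb/Cr range test for skin and a horizontal projection of Sobel responses on the Y channel whose peak row marks a strong horizontal edge such as the eye line. The threshold values and the tiny synthetic image are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the skin-color test and the Sobel-plus-projection cue.
import numpy as np

def skin_mask(cb: np.ndarray, cr: np.ndarray) -> np.ndarray:
    """Approximate, commonly cited skin ranges in YCbCr (illustrative)."""
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def peak_edge_row(y: np.ndarray) -> int:
    """Row with the strongest horizontal-edge energy (vertical Sobel)."""
    k = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    h, w = y.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # small hand-rolled 3x3 correlation
        for j in range(3):
            out += k[i, j] * y[i:h - 2 + i, j:w - 2 + j]
    return int(np.argmax(np.abs(out).sum(axis=1))) + 1

img = np.zeros((20, 20))
img[8:10, :] = 200.0            # a strong horizontal band standing in for eyes
print(peak_edge_row(img))
```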


Feature-Point Extraction by Dynamic Linking Model based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • 신영숙
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.1
    • /
    • pp.11-16
    • /
    • 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts low-dimensional representative feature points from the edges extracted from a neutral face. The feature points of the neutral face are used as a template to extract the feature points of facial expression images. To match the feature points of an expression face point-to-point against those of the neutral face, a dynamic linking model is applied in two steps, called coarse mapping and fine mapping. This paper thus presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results of this study were applied to extract features automatically in dimension-based facial expression recognition [1].
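
The FCM step used to pick representative points can be sketched as below; the Gabor edge extraction is omitted, and the synthetic 2-D points, fuzzifier m = 2, and iteration count are illustrative assumptions.

```python
# Minimal fuzzy C-means: alternate center and membership updates.
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))           # standard membership update
        U /= U.sum(axis=0)
    return centers, U

pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.5, (50, 2)),
                 np.random.default_rng(2).normal(5.0, 0.5, (50, 2))])
centers, U = fcm(pts)
print(np.round(centers, 1))
```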


A Potential Role of fgf4, fgf24, and fgf17 in Pharyngeal Pouch Formation in Zebrafish

  • Sil Jin;Chong Pyo Choe
    • Development and Reproduction
    • /
    • v.28 no.2
    • /
    • pp.55-65
    • /
    • 2024
  • In vertebrates, Fgf signaling is essential for the development of the pharyngeal pouches, which control facial skeletal development. Genetically, fgf3 and fgf8 are required for pouch formation in mice and zebrafish. However, the loss-of-function phenotypes of fgf3 and fgf8 are milder than expected in both species, suggesting that an additional fgf gene(s) may be involved in pouch formation. Here, we analyzed the expression, regulation, and function of three fgfs, fgf4, fgf24, and fgf17, during pouch development in zebrafish. We find that they are expressed in distinct regions of the pharyngeal endoderm during pouch formation, with fgf4 and fgf17 also expressed in the adjacent mesoderm, in addition to the previously reported endodermal fgf3 and mesodermal fgf8 expression. The endodermal expression of fgf4, fgf24, and fgf17 and the mesodermal expression of fgf4 and fgf17 are positively regulated by Tbx1, but not by Fgf3, during pouch formation. Fgf8 is required for the endodermal expression of fgf4 and fgf24. Interestingly, however, single mutants, all double mutant combinations, and the triple mutant for fgf4, fgf24, and fgf17 do not show any defects in the pouches or facial skeletons. Considering the high degree of genetic redundancy among Fgf signaling components in zebrafish craniofacial development, our results suggest that fgf4, fgf24, and fgf17 have a potential role in pouch formation that is redundant with other fgf gene(s).

Face Image Analysis using Adaboost Learning and Non-Square Differential LBP (아다부스트 학습과 비정방형 Differential LBP를 이용한 얼굴영상 특징분석)

  • Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.6
    • /
    • pp.1014-1023
    • /
    • 2016
  • In this study, we present a non-square Differential LBP operation that can describe micro-patterns well in the horizontal and vertical components. We propose a way to define the LBP operation with various directional components in addition to the diagonal component. To verify the validity of the proposed operation, the Differential LBP was evaluated for accuracy, sensitivity, and specificity in facial expression classification. In the accuracy comparison, the proposed LBP operation obtains better results than the Square LBP and LBP-CS operations. The proposed Differential LBP also outperforms the two previous methods on the sensitivity and specificity indicators for 'Neutral', 'Happiness', 'Surprise', and 'Anger', confirming its effectiveness.
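
For reference, the baseline square LBP operator that the non-square variants above modify can be sketched as follows; the neighbor ordering and the sample patch are illustrative choices, not the paper's definition.

```python
# Plain 3x3 LBP: threshold each neighbor against the center pixel and
# pack the resulting bits into an 8-bit code.
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """8-bit LBP code of a 3x3 patch (clockwise from top-left)."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] >= c else 0 for i, j in order]
    return sum(b << k for k, b in enumerate(bits))

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
print(lbp_code(patch))  # top three neighbors set -> 0b00000111 = 7
```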

Comic Emotional Expression for Effective Sign-Language Communications (효율적인 수화 통신을 위한 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.651-654
    • /
    • 1999
  • In this paper, we propose an emotional expression method using a comic model and special marks for effective sign-language communication. Research to date has aimed at producing more realistic facial and emotional expressions. When representing only emotion, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which most components are discarded except necessary parts such as the eyebrows, eyes, nose, and mouth. In the comic model, we can use special marks to emphasize various emotions. We represent emotional expressions using the Action Units (AU) of the Facial Action Coding System (FACS) and define Special Units (SU) for emphasizing emotions. Experimental results show that the proposed method could be used efficiently for sign-language image communication.


A FACE IMAGE GENERATION SYSTEM FOR TRANSFORMING THREE DIMENSIONS OF HIGHER-ORDER IMPRESSION

  • Ishi, Hanae;Sakuta, Yuiko;Akamatsu, Shigeru;Gyoba, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.703-708
    • /
    • 2009
  • The present paper describes the application of an improved impression transfer vector method (Sakurai et al., 2007) to transform the three basic dimensions (Evaluation, Activity, and Potency) of higher-order impressions. First, a set of face shapes and surface textures was represented by multi-dimensional vectors. Second, the variation among faces was coded in reduced parameters derived by principal component analysis. Third, a facial attribute along a given impression dimension was analyzed to select discriminative parameters from among the principal components with higher sensitivity to impressions and to obtain an impression transfer vector. Finally, the parametric coordinates were changed by adding or subtracting the impression transfer vector, and the image was manipulated so that the facial appearance clearly exhibits the transformed impression. A psychological rating experiment confirmed that the impression transfer vector modulated the three dimensions of higher-order impression. We discuss the versatility of the impression transfer vector method.
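
The add-or-subtract manipulation in PCA space can be sketched as below. The synthetic face vectors, the 10-component model, and the choice of component 2 as the "impression axis" are illustrative assumptions; the actual transfer vector comes from the discriminative analysis the abstract describes.

```python
# Sketch: code faces with PCA, shift one face along a transfer vector in
# parameter space, and map both shifted versions back to face space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 50))      # stand-ins for shape+texture vectors
pca = PCA(n_components=10).fit(faces)

coords = pca.transform(faces[:1])       # one face in parameter space
transfer = np.zeros(10)
transfer[2] = 3.0                       # hypothetical impression axis
stronger = pca.inverse_transform(coords + transfer)
weaker = pca.inverse_transform(coords - transfer)
print(stronger.shape)
```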
