• Title/Summary/Keyword: facial landmark points

Quantification of three-dimensional facial asymmetry for diagnosis and postoperative evaluation of orthognathic surgery

  • Cao, Hua-Lian;Kang, Moon-Ho;Lee, Jin-Yong;Park, Won-Jong;Choung, Han-Wool;Choung, Pill-Hoon
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.42
    • /
    • pp.17.1-17.11
    • /
    • 2020
  • Background: Three-dimensional computed tomography (3D-CT) has been widely used to evaluate facial asymmetry. This study proposes a method to quantify facial asymmetry based on 3D-CT. Methods: The normal standard group consisted of twenty-five male subjects with balanced faces and normal occlusion. Five anatomical landmarks were selected as reference points and ten as measurement points for evaluating facial asymmetry. A facial asymmetry index was formulated from the distances between the landmarks; the index for a given landmark is zero when the landmark lies in a three-dimensionally symmetric position and increases as the asymmetry of the landmark increases. The mean index value for each of the ten measurement landmarks was obtained in the normal standard group, and the index was then applied to patients who had undergone orthognathic surgery to evaluate preoperative facial asymmetry and postoperative improvement. Results: The reference facial asymmetry index for each landmark in the normal standard group ranged from 1.77 to 3.38. A polygonal chart was drawn to visualize the degree of asymmetry. In three patients who had undergone orthognathic surgery, the facial asymmetry index clearly reflected both the preoperative facial asymmetry and the postoperative improvement. Conclusions: The new facial asymmetry index can efficiently quantify the degree of facial asymmetry from 3D-CT and could serve as an evaluation standard for facial asymmetry analysis.
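
The idea of an index that is zero for perfectly mirrored landmarks and grows with asymmetry can be sketched as follows. The exact formula is not given in the abstract, so the definition below (mean absolute difference between a left/right landmark pair's distances to midline reference points) is an illustrative assumption:

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D landmark points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def asymmetry_index(left_pt, right_pt, midline_refs):
    """Hypothetical asymmetry index: mean absolute difference between the
    distances of a paired left/right landmark to each midline reference
    point. Zero when the pair is perfectly mirrored about the midline.
    (The paper's actual formula is not given in the abstract.)"""
    diffs = [abs(dist(left_pt, r) - dist(right_pt, r)) for r in midline_refs]
    return sum(diffs) / len(diffs)

# A pair mirrored about the x = 0 midsagittal plane scores exactly zero:
refs = [(0.0, 0.0, 0.0), (0.0, 5.0, 2.0)]
print(asymmetry_index((-3.0, 1.0, 0.0), (3.0, 1.0, 0.0), refs))  # 0.0
```

Computing such a value for each of the ten measurement landmarks would yield the per-landmark values plotted on the paper's polygonal chart.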

Robust 3D Facial Landmark Detection Using Angular Partitioned Spin Images (각 분할 스핀 영상을 사용한 3차원 얼굴 특징점 검출 방법)

  • Kim, Dong-Hyun;Choi, Kang-Sun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.199-207
    • /
    • 2013
  • Spin images, which efficiently represent surface features of 3D mesh models, have been used to detect facial landmark points. However, at a given point, different normal directions can produce quite different spin images. Moreover, because 3D points are projected into the 2D (${\alpha}-{\beta}$) space during spin-image generation, surface features cannot be described unambiguously. In this paper, we present a method to detect 3D facial landmarks using improved spin images in which the search area is partitioned by angle. By generating sub-spin images for the angular partitions of 3D space, more distinctive features describing the corresponding surfaces can be obtained, improving landmark detection performance. To make the spin images robust to inaccurate surface normal directions, we average each surface normal with its neighboring normal vectors. Experimental results show that the proposed method increases landmark detection accuracy by about 34% over a conventional method.
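
A minimal sketch of the two ingredients named above: the standard $({\alpha},{\beta})$ spin-image projection, and an angular binning step around the normal axis. The choice of reference direction `u` and the sector layout are assumptions; the paper's exact partitioning scheme is not described in the abstract:

```python
import math

def spin_coords(p, n, x):
    """Map a 3D point x into the 2D (alpha, beta) spin-image space of the
    oriented point (p, n): beta is the signed distance along the unit
    normal n, alpha the radial distance from the normal axis."""
    d = [xi - pi for xi, pi in zip(x, p)]
    beta = sum(di * ni for di, ni in zip(d, n))              # along-normal component
    alpha = math.sqrt(max(sum(di * di for di in d) - beta * beta, 0.0))
    return alpha, beta

def angular_bin(p, n, u, x, num_bins):
    """Assign x to one of num_bins angular sectors around the normal axis,
    so each sector yields its own sub-spin image. u is an assumed reference
    direction orthogonal to n; v = n x u completes the local frame."""
    d = [xi - pi for xi, pi in zip(x, p)]
    beta = sum(di * ni for di, ni in zip(d, n))
    radial = [di - beta * ni for di, ni in zip(d, n)]        # tangent-plane component
    v = [n[1] * u[2] - n[2] * u[1],
         n[2] * u[0] - n[0] * u[2],
         n[0] * u[1] - n[1] * u[0]]
    ang = math.atan2(sum(ri * vi for ri, vi in zip(radial, v)),
                     sum(ri * ui for ri, ui in zip(radial, u)))
    return int((ang + math.pi) / (2 * math.pi) * num_bins) % num_bins
```

In the full method, a histogram over $({\alpha},{\beta})$ would be accumulated separately for each angular bin, and the normal `n` would first be smoothed by averaging with its neighbors.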

Analysis of facial expression recognition (표정 분류 연구)

  • Son, Nayeong;Cho, Hyunsun;Lee, Sohyun;Song, Jongwoo
    • The Korean Journal of Applied Statistics
    • /
    • v.31 no.5
    • /
    • pp.539-554
    • /
    • 2018
  • Effective interaction between user and device is considered an important capability of IoT devices. Some applications must recognize human facial expressions in real time and make accurate judgments in order to respond to situations correctly. Much research on facial image analysis has therefore been conducted to build faster and more accurate recognition systems. In this study, we constructed an automatic facial expression recognition system in two steps: a face recognition step and a classification step. We compared various models trained on different feature sets: pixel information, landmark coordinates, Euclidean distances between landmark points, and arctangent angles. We found a fast and efficient prediction model that uses only 30 principal components of the face landmark information. Among the prediction models we applied, including linear discriminant analysis (LDA), random forests, support vector machine (SVM), and bagging, the SVM model gave the best result. The LDA model gave the second-best prediction accuracy but fits and predicts faster than SVM and the other methods. Finally, we compared our method with the Microsoft Azure Emotion API and a convolutional neural network (CNN); our method gives a very competitive result.
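
The geometric feature sets mentioned above (pairwise Euclidean distances and arctangent angles between landmarks) can be sketched like this; the dictionary keys are illustrative names, and in the study such features would then be reduced to 30 principal components before classification:

```python
import math
from itertools import combinations

def landmark_features(landmarks):
    """Pairwise geometric features over 2D landmark points: the Euclidean
    distance and the arctangent angle between every pair."""
    feats = {}
    for (i, (x1, y1)), (j, (x2, y2)) in combinations(enumerate(landmarks), 2):
        feats[f"dist_{i}_{j}"] = math.hypot(x2 - x1, y2 - y1)
        feats[f"angle_{i}_{j}"] = math.atan2(y2 - y1, x2 - x1)
    return feats

pts = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]
f = landmark_features(pts)
print(f["dist_0_1"])   # 5.0
print(f["angle_0_2"])  # 0.0 (horizontal pair)
```

With 68 landmarks this produces 68·67/2 = 2278 distance features and as many angles, which motivates the PCA step before feeding a classifier.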

Study on Weight Summation Storage Algorithm of Facial Recognition Landmark (가중치 합산 기반 안면인식 특징점 저장 알고리즘 연구)

  • Jo, Seonguk;You, Youngkyon;Kwak, Kwangjin;Park, Jeong-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.1
    • /
    • pp.163-170
    • /
    • 2022
  • This paper addresses the problem that object recognition models cannot guarantee ideal performance and speed on the unrefined facial inputs encountered in real life, and introduces a feature-point storage algorithm based on weight summation. Many facial recognition pipelines ensure accuracy under ideal conditions, but their inability to cope with the numerous biases that arise in real life is drawing attention, as it can lead to serious problems in face recognition systems closely tied to security. Exploiting the fact that variables such as picture composition tend toward an average form over many captures, the proposed method compares the feature points extracted from an input against a small number of stored feature points that are not overfit to any particular bias, enabling fast and accurate real-time face recognition.
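
One plausible reading of "weight summation storage" is a running weighted average of landmark feature vectors, so the stored template converges toward the average form rather than any single biased capture. The exact weighting scheme is not given in the abstract, so this is an illustrative sketch:

```python
def update_stored_features(stored, weight_sum, new_feats, new_weight):
    """Fold a new capture's feature vector into the stored template as a
    weighted running average (an assumed interpretation of the paper's
    weight-summation storage; the actual scheme is not in the abstract)."""
    updated = [(s * weight_sum + f * new_weight) / (weight_sum + new_weight)
               for s, f in zip(stored, new_feats)]
    return updated, weight_sum + new_weight

template, w = [0.0, 0.0], 0.0
for feats, wt in ([(1.0, 2.0), 1.0], [(3.0, 4.0), 1.0]):
    template, w = update_stored_features(template, w, feats, wt)
print(template)  # [2.0, 3.0] -- the equally weighted mean of the two captures
```

At recognition time, a query's feature points would be compared against this small averaged template instead of every stored capture, which is what makes the lookup fast.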

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice
    • /
    • v.7 no.2
    • /
    • pp.32-39
    • /
    • 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to CNN. In this way, geometric and appearance features could be simultaneously learned, making CNN more discriminative for FER. A simple CNN extension is also presented in this paper, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
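
The core idea, feeding the CNN geometry and appearance together, can be sketched by stacking a landmark-visualization channel onto the expression image. The rendering below (marking landmark pixels in an extra channel) is an illustrative assumption; the paper's exact visualization is not specified in the abstract:

```python
def add_geometry_channel(image, landmarks):
    """Append a landmark-visualization channel to a grayscale image so a
    CNN sees appearance and geometry simultaneously. image is a 2D list of
    intensities; landmarks are (row, col) points marked with 1.0."""
    h, w = len(image), len(image[0])
    geo = [[0.0] * w for _ in range(h)]
    for r, c in landmarks:
        if 0 <= r < h and 0 <= c < w:   # ignore landmarks outside the crop
            geo[r][c] = 1.0
    # Per-pixel [appearance, geometry] pairs form the 2-channel CNN input.
    return [[[image[r][c], geo[r][c]] for c in range(w)] for r in range(h)]
```

For the sequence-based extension, the geometry channel could instead encode landmark displacement between frames, letting the same architecture pick up geometric expression change.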

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.512-517
    • /
    • 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vectors at each landmark, based on the center pixels of the motion vector window. The facial emotion features are modeled as combinations of these optical flow vectors, and the emotional state of a facial image can be estimated with a probabilistic technique such as a Bayesian classifier. We also extract optimal emotional features, those with high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
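
The LK step at a single landmark reduces to solving the 2×2 normal equations over the gradients in the window centered on that landmark. A minimal sketch (gradient lists stand in for the window samples around an ASM landmark):

```python
def lk_flow_at_landmark(Ix, Iy, It):
    """Solve the Lucas-Kanade normal equations for one landmark window:
        [sum Ix^2   sum IxIy] [u]   [sum IxIt]
        [sum IxIy   sum Iy^2] [v] = -[sum IyIt]
    Ix, Iy, It are spatial/temporal gradients sampled in the window."""
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    bx = -sum(ix * it for ix, it in zip(Ix, It))
    by = -sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:            # aperture problem: flow not recoverable
        return None
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Gradients consistent with pure horizontal motion u = 1 (so It = -Ix):
print(lk_flow_at_landmark([1.0, 2.0], [1.0, -1.0], [-1.0, -2.0]))  # (1.0, 0.0)
```

The per-landmark flow vectors computed this way would then be concatenated into the emotion feature vector that the Bayesian classifier and the CSP selection operate on.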

A Facial Morphing Method Using Delaunay Triangle of Facial Landmarks (얼굴 랜드마크의 들로네 삼각망을 이용한 얼굴 모핑 기법)

  • Park, Kyung Nam
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.213-220
    • /
    • 2018
  • Face morphing, one of the most powerful techniques in image processing and computer graphics, changes an image progressively and naturally from an original image to a target image. In this paper, we propose a method that generates Delaunay triangles from the facial landmark vertices produced by the Dlib face landmark detector and implements morphing through warping and cross-dissolving of corresponding Delaunay triangles between the original and target images. The main characteristic of our face morphing method is that the vertex points for the major facial features, such as the eyes, eyebrows, nose, and mouth, are generated automatically rather than manually and are used to construct the Delaunay triangulation automatically. Simulations show that vertices can also be added manually to obtain more natural morphing results.
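
Two of the steps above are simple enough to show directly: interpolating corresponding Delaunay vertices at morph fraction t, and cross-dissolving the two warped pixel values. This is only a sketch of those steps; the full method also warps each triangle's interior with an affine transform, which is omitted here:

```python
def morph_point(src_pt, dst_pt, t):
    """Linearly interpolate a Delaunay vertex between source and target
    positions at morph fraction t in [0, 1]."""
    return tuple((1 - t) * s + t * d for s, d in zip(src_pt, dst_pt))

def cross_dissolve(src_pixel, warped_pixel, t):
    """Blend the source-warped and target-warped pixel values at fraction t."""
    return (1 - t) * src_pixel + t * warped_pixel

print(morph_point((0.0, 0.0), (10.0, 4.0), 0.5))  # (5.0, 2.0)
print(cross_dissolve(100.0, 200.0, 0.25))         # 125.0
```

Running t from 0 to 1 and repeating this per triangle produces the progressive change from original to target image.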

The Association between Facial Morphology and Cold Pattern

  • Ahn, Ilkoo;Bae, Kwang-Ho;Jin, Hee-Jeong;Lee, Siwoo
    • The Journal of Korean Medicine
    • /
    • v.42 no.4
    • /
    • pp.102-119
    • /
    • 2021
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian Medicine. In this paper, using a fully automated facial shape analysis system, we show that facial morphological features are associated with cold pattern. Methods: The facial morphological features calculated from 68 facial landmarks included the angles, areas, and distances between the landmark points of each part of the face. Cold pattern severity was determined using a questionnaire and the cold pattern scores (CPS) were used for analysis. The association between facial features and CPS was calculated using Pearson's correlation coefficient and partial correlation coefficients. Results: The upper chin width and the lower chin width were negatively associated with CPS. The distance from the center point to the middle jaw and the distance from the center point to the lower jaw were negatively associated with CPS. The angle of the face outline near the ear and the angle of the chin line were positively associated with CPS. The area of the upper part of the face and the area of the face except the sensory organs were negatively associated with CPS. The number of facial morphological features that exhibited a statistically significant correlation with CPS was 37 (unadjusted). Conclusions: In this study of a Korean population, subjects with a high CPS had a more pointed chin, longer face, more angular jaw, higher eyes, and more upward corners of the mouth, and their facial sensory organs were relatively widespread.
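
The association test described above is an ordinary Pearson correlation between each facial morphological feature and the cold pattern score (CPS). A self-contained sketch, with purely illustrative feature values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between a facial feature series and
    the cold pattern scores (CPS)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

chin_width = [4.0, 3.5, 3.0, 2.5]   # illustrative values only
cps = [10.0, 20.0, 30.0, 40.0]
print(pearson_r(chin_width, cps))   # -1.0 (a perfect negative association)
```

In the study, this coefficient (and partial correlations controlling for covariates) was computed for each of the landmark-derived angles, areas, and distances, with 37 features reaching statistical significance before adjustment.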

Prediction accuracy of incisal points in determining occlusal plane of digital complete dentures

  • Kenta Kashiwazaki;Yuriko Komagamine;Sahaprom Namano;Ji-Man Park;Maiko Iwaki;Shunsuke Minakuchi;Manabu Kanazawa
    • The Journal of Advanced Prosthodontics
    • /
    • v.15 no.6
    • /
    • pp.281-289
    • /
    • 2023
  • PURPOSE. This study aimed to predict the positional coordinates of incisor points from scan data of conventional complete dentures and to verify their accuracy. MATERIALS AND METHODS. Standard triangulated language (STL) data from 100 scanned pairs of complete upper and lower dentures were imported into computer-aided design software, from which the position coordinates of the points corresponding to each landmark of the jaw were obtained. The x, y, and z coordinates of the incisor point (XP, YP, and ZP) were obtained from the maxillary and mandibular landmark coordinates using regression or calculation formulas, and accuracy was verified by determining the deviation between measured and predicted coordinate values. YP was obtained in two ways, using the hamular-incisive-papilla (HIP) plane and facial measurements. Multiple regression analysis was used to predict ZP. Root mean squared error (RMSE) values were used to verify the accuracy of XP and YP; the accuracy of ZP was verified by obtaining the RMSE after cross-validation using the remaining 30 cases of denture STL data. RESULTS. The RMSE for predicting XP was 2.22. For YP, the RMSE of the HIP-plane method was 3.18 and that of the facial-measurement method was 0.73. Cross-validation gave an RMSE of 1.53. CONCLUSION. YP and ZP could be predicted from anatomical landmarks of the maxillary and mandibular edentulous jaw, and the results suggest that YP can be predicted with better accuracy when the position of the lower border of the upper lip is added.
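
The accuracy metric used throughout the study is the root mean squared error between measured and predicted coordinate values, which is straightforward to state in code:

```python
import math

def rmse(measured, predicted):
    """Root mean squared error between measured and predicted coordinate
    values, the accuracy metric reported for XP, YP, and ZP."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 (perfect prediction)
print(rmse([0.0, 0.0], [3.0, 4.0]))            # sqrt((9 + 16) / 2) ~ 3.54
```

Under this metric, the reported values (e.g., RMSE 0.73 for YP from facial measurements vs. 3.18 from the HIP plane) directly compare the average coordinate deviation of the two prediction routes.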

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.171-187
    • /
    • 2023
  • When a person's face is captured by a recording device such as a low-pixel surveillance camera, the low image quality makes the face difficult to recognize, which can lead to problems such as failing to identify a criminal suspect or a missing person. Existing studies on face recognition used refined datasets, so performance could not be measured across varied environments. To address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images captured in various environments to generate high-quality images, and then improves facial feature point detection performance on them. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the frame; in addition, a facial image dataset covering mask-wearing situations was chosen to explore extension to real-world problems. After image quality improvement, the face detection rate increased by an average of 3.47 times for images without masks and 9.92 times for images with masks, and the RMSE of the facial feature points decreased by an average of 8.49 times with masks and 2.02 times without masks. These results verify the applicability of the proposed method: improving image quality raises the recognition rate for facial images captured at low quality.