• Title/Abstract/Keyword: facial features

642 search results (processing time: 0.03 seconds)

설명가능한 인공지능을 활용한 안면 특징 분석 기반 사상체질 검출 (Sasang Constitution Detection Based on Facial Feature Analysis Using Explainable Artificial Intelligence)

  • 김정균;안일구;이시우 / 사상체질의학회지 / Vol. 36, No. 2 / pp.39-48 / 2024
  • Objectives The aim was to develop a method for detecting Sasang constitution based on the ratios of facial landmarks and to provide an objective and reliable tool for Sasang constitution classification. Methods Facial images, KS-15 scores, and certainty scores were collected from subjects identified by the Korean Medicine Data Center. Facial ratio landmarks were detected, yielding 2279 facial ratio features. Tree-based models were trained to classify Sasang constitution, and Shapley Additive Explanations (SHAP) analysis was employed to identify important facial features. Additionally, Body Mass Index (BMI) and a personality questionnaire were incorporated as supplementary information to enhance model performance. Results Using the tree-based models, the accuracies for classifying the Taeeum, Soeum, and Soyang constitutions were 81.90%, 90.49%, and 81.90%, respectively. SHAP analysis revealed important facial features, and the inclusion of BMI and the personality questionnaire improved model performance. This demonstrates that facial ratio-based Sasang constitution analysis yields effective and accurate classification results. Conclusions Facial ratio-based Sasang constitution analysis provides rapid and objective results compared to traditional methods. This approach holds promise for enhancing personalized medicine in Korean traditional medicine.
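
As an illustration of the tree-plus-SHAP workflow described in this abstract, here is a minimal Python sketch; it is not the authors' pipeline. It trains a random forest on a placeholder facial-ratio matrix and computes SHAP values to rank features. `X` and `y` are hypothetical stand-ins for the paper's 2279 facial ratios and constitution labels.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the paper's facial-ratio features and labels.
rng = np.random.default_rng(0)
X = rng.random((200, 2279))
y = rng.choice(["Taeeum", "Soeum", "Soyang"], size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

# SHAP values indicate which facial ratios contribute most to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```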

안정적인 실시간 얼굴 특징점 추적과 감정인식 응용 (Robust Real-time Tracking of Facial Features with Application to Emotion Recognition)

  • 안병태;김응희;손진훈;권인소 / 로봇학회논문지 / Vol. 8, No. 4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in the human-robot interaction (HRI) field for tasks such as face recognition, gaze estimation, and emotion recognition. The Active Shape Model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. These inaccurate facial feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
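
The ASM-plus-LK tracking step can be illustrated with a short sketch, a rough approximation rather than the paper's implementation: facial feature points are propagated between consecutive grayscale frames with OpenCV's pyramidal Lucas-Kanade optical flow. `prev_gray`, `next_gray`, and `prev_points` are assumed to come from the application, e.g., an ASM fit on the first frame.

```python
import cv2
import numpy as np

def track_landmarks(prev_gray, next_gray, prev_points):
    """Track facial feature points from one grayscale frame to the next.

    prev_points: float32 array of shape (N, 1, 2), e.g., from an ASM fit.
    """
    next_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_points.astype(np.float32), None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    # Points whose flow was not found (e.g., under partial occlusion) could be
    # re-initialized from the ASM fit; the paper's own occlusion handling is not
    # detailed in the abstract.
    ok = status.ravel() == 1
    return next_points, ok
```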

얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구 (Study for Classification of Facial Expression using Distance Features of Facial Landmarks)

  • 배진희;왕보현;임준식 / 전기전자학회논문지 / Vol. 25, No. 4 / pp.613-618 / 2021
  • Facial expression recognition has been a topic of continuous research across many fields. In this paper, features extracted by computing the distances between facial image landmarks are used to analyze the relationships among the landmarks and to classify five facial expressions. The reliability of the data and labels was improved through a labeling process performed by multiple observers. In addition, faces were detected in the original data, landmark coordinates were extracted and used as features, and a genetic algorithm was applied to select the features that contribute relatively more to classification. Facial expression classification was performed with the proposed method, which showed better performance than classification using a CNN.
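
A minimal sketch of the distance-feature step follows; it is not the authors' code. It computes all pairwise Euclidean distances between detected landmarks and uses them as the expression feature vector; the landmark detector itself and the subsequent genetic-algorithm feature selection are assumed to be provided elsewhere.

```python
import numpy as np
from scipy.spatial.distance import pdist

def landmark_distance_features(landmarks):
    """Pairwise distances between facial landmarks as an expression feature vector.

    landmarks: array of shape (N, 2); for N = 68 this yields 68 * 67 / 2 = 2278 values.
    """
    return pdist(np.asarray(landmarks, dtype=float))
```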

Facial Feature Extraction with Its Applications

  • Lee, Minkyu;Lee, Sangyoun / Journal of International Society for Simulation Surgery / Vol. 2, No. 1 / pp.7-9 / 2015
  • Purpose In many face-related applications such as head pose estimation, 3D face modeling, and facial appearance manipulation, robust and fast facial feature extraction is necessary. We present a facial feature extraction method based on shape regression and feature selection for real-time facial feature extraction. Materials and Methods The facial features are initialized by a statistical shape model, and the shape of the facial features is then deformed iteratively according to texture patterns selected from the feature pool. Results We obtain fast and robust facial feature extraction with an error of less than 4% and a processing time of less than 12 ms. The alignment error is measured as the average ratio of pixel difference to inter-ocular distance. Conclusion The accuracy and processing time of the method are sufficient for facial feature-based applications, and it can be used for face beautification or 3D face modeling.
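
The reported alignment error, the average ratio of pixel difference to inter-ocular distance, can be written compactly. The sketch below is an interpretation of that metric, with `pred` and `gt` as assumed predicted and ground-truth landmark arrays and the eye indices chosen by the caller.

```python
import numpy as np

def alignment_error(pred, gt, left_eye_idx, right_eye_idx):
    """Mean point-to-point error normalized by inter-ocular distance, in percent."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)
    return 100.0 * per_point.mean() / inter_ocular  # a value below 4 means <4% error
```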

The Association between Facial Morphology and Cold Pattern

  • Ahn, Ilkoo;Bae, Kwang-Ho;Jin, Hee-Jeong;Lee, Siwoo / 대한한의학회지 / Vol. 42, No. 4 / pp.102-119 / 2021
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian Medicine. In this paper, using a fully automated facial shape analysis system, we show that facial morphological features are associated with cold pattern. Methods: The facial morphological features calculated from 68 facial landmarks included the angles, areas, and distances between the landmark points of each part of the face. Cold pattern severity was determined using a questionnaire and the cold pattern scores (CPS) were used for analysis. The association between facial features and CPS was calculated using Pearson's correlation coefficient and partial correlation coefficients. Results: The upper chin width and the lower chin width were negatively associated with CPS. The distance from the center point to the middle jaw and the distance from the center point to the lower jaw were negatively associated with CPS. The angle of the face outline near the ear and the angle of the chin line were positively associated with CPS. The area of the upper part of the face and the area of the face except the sensory organs were negatively associated with CPS. The number of facial morphological features that exhibited a statistically significant correlation with CPS was 37 (unadjusted). Conclusions: In this study of a Korean population, subjects with a high CPS had a more pointed chin, longer face, more angular jaw, higher eyes, and more upward corners of the mouth, and their facial sensory organs were relatively widespread.
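
As a small illustration of the association analysis (not the study's code), the sketch below correlates each facial morphological feature with the cold pattern score using Pearson's r; `features` and `cps` are hypothetical placeholders for the study data, and the partial-correlation adjustment is omitted.

```python
import numpy as np
from scipy.stats import pearsonr

def correlate_with_cps(features, cps):
    """Pearson r and p-value of each facial feature against the cold pattern score."""
    features = np.asarray(features, dtype=float)
    cps = np.asarray(cps, dtype=float)
    return [pearsonr(features[:, j], cps) for j in range(features.shape[1])]
```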

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho / Journal of Information Science Theory and Practice / Vol. 7, No. 2 / pp.32-39 / 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to CNN. In this way, geometric and appearance features could be simultaneously learned, making CNN more discriminative for FER. A simple CNN extension is also presented in this paper, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
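
One reading of the "facial geometry visualization" input, an interpretation of the abstract rather than the paper's code, is sketched below: the landmark geometry is drawn onto the expression image so that the CNN sees appearance and geometry together. `image` (a BGR face crop) and `landmarks` (N x 2 pixel coordinates) are assumed to come from a face and landmark detector.

```python
import cv2
import numpy as np

def visualize_geometry(image, landmarks, radius=2):
    """Overlay landmark points on the face image that will be fed to the CNN."""
    vis = image.copy()
    for x, y in np.asarray(landmarks, dtype=int):
        cv2.circle(vis, (int(x), int(y)), radius, (0, 255, 0), -1)
    return vis
```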

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun / KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: a person's face cannot be easily recognized from distance-based depth videos, so depth cameras also resolve some of the privacy issues that can arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, efficient Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces and are further transformed by Generalized Discriminant Analysis (GDA) to make them more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based facial expression recognition approach is compared to conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them by obtaining better recognition rates.
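
A minimal sketch of the first stage only, LBP feature extraction with scikit-image, is shown below; it is not the full LBP-GDA-HMM pipeline. `depth_face` is an assumed 2-D (e.g., 8-bit) depth image of the face region, and its LBP histogram would serve as the per-frame observation vector.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(depth_face, points=8, radius=1):
    """Uniform-LBP histogram of a depth face image."""
    lbp = local_binary_pattern(np.asarray(depth_face), points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```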

적응형 결정 트리를 이용한 국소 특징 기반 표정 인식 (Local Feature Based Facial Expression Recognition Using Adaptive Decision Tree)

  • 오지훈;반유석;이인재;안충현;이상윤 / 한국통신학회논문지 / Vol. 39A, No. 2 / pp.92-99 / 2014
  • This paper proposes a facial expression recognition method based on a decision tree structure. Local features are extracted from expression images using an Active Shape Model (ASM) and Local Binary Patterns (LBP). Discriminant features that classify expressions well are then extracted from the local features, and these discriminant features are used to classify every pairwise combination of two expressions. Based on the sum of correct recognitions obtained from these classifications, the local region and expression pair that maximize correct recognition are determined. These pairwise branch classifications are combined to build the decision tree. The decision tree-based method achieves a recognition rate of about 84.7%, outperforming the method that does not use the decision tree.
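
The pairwise-expression idea can be illustrated with a simplified sketch, which is not the paper's adaptive tree construction: every pair of expressions gets its own classifier on the local features, and the pairs are ranked by cross-validated accuracy as candidates for the tree's first split. `X` and `y` are assumed ASM/LBP feature vectors and expression labels.

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def rank_expression_pairs(X, y):
    """Rank expression pairs by how well the local features separate them."""
    X, y = np.asarray(X), np.asarray(y)
    scores = {}
    for a, b in combinations(sorted(set(y)), 2):
        mask = (y == a) | (y == b)
        scores[(a, b)] = cross_val_score(LinearSVC(), X[mask], y[mask], cv=3).mean()
    # The best-separated pair is a natural candidate for the first branch of the tree.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```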

DETECTION OF FACIAL FEATURES IN COLOR IMAGES WITH VARIOUS BACKGROUNDS AND FACE POSES

  • Park, Jae-Young;Kim, Nak-Bin / 한국멀티미디어학회논문지 / Vol. 6, No. 4 / pp.594-600 / 2003
  • In this paper, we propose a method for detecting facial features in color images with various backgrounds and face poses. First, the proposed method extracts face candidate regions from images with various backgrounds containing skin-tone colors and complex objects, using the color and edge information of the face. Then, using the elliptical shape of the face, we correct the rotation, scale, and tilt of the face region caused by various head poses. Finally, we verify the face using facial features and detect those features. Experimental results show that the detection accuracy is high and that the proposed method can be used effectively in pose-invariant face recognition systems.
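
A minimal sketch of the first step only, an interpretation of the abstract rather than the authors' code, is given below: candidate face regions are proposed by skin-tone thresholding in YCrCb color space; the threshold values are illustrative, not the paper's.

```python
import cv2
import numpy as np

def skin_candidate_mask(bgr_image):
    """Binary mask of skin-toned pixels used to propose face candidate regions."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # illustrative Cr/Cb bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Small noise is removed before ellipse fitting and edge-based verification.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```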

Realtime Analysis of Sasang Constitution Types from Facial Features Using Computer Vision and Machine Learning

  • Abdullah;Shah Mahsoom Ali;Hee-Cheol Kim / Journal of information and communication convergence engineering / Vol. 22, No. 3 / pp.256-266 / 2024
  • Sasang constitutional medicine (SCM) is one of the best traditional therapeutic approaches used in Korea. SCM prioritizes personalized treatment that considers the unique constitution of an individual, encompassing their physical characteristics, personality traits, and susceptibility to specific diseases. Facial features are essential for diagnosing Sasang constitutional types (SCTs). This study aimed to develop a real-time artificial intelligence-based model for diagnosing SCTs from facial images by building an SCT prediction model with machine learning. Facial features were extracted from all images using feature engineering techniques, and the fusion of these features was used to train the AI model. We used four machine learning algorithms, namely random forest (RF), multilayer perceptron (MLP), gradient boosting machine (GBM), and extreme gradient boosting (XGB), to investigate SCTs. The GBM outperformed all the other models. The highest accuracy achieved in the experiment was 81%, indicating the robustness of the proposed model and its suitability for real-time applications.
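
A minimal sketch of the model comparison, not the authors' pipeline, is shown below: RF, MLP, GBM, and XGBoost are evaluated with cross-validation on placeholder facial-feature vectors; `X` and `y` are hypothetical stand-ins for the extracted features and SCT labels.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

# Hypothetical placeholders for the fused facial features and SCT labels (0-2).
rng = np.random.default_rng(0)
X = rng.random((300, 50))
y = rng.integers(0, 3, size=300)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "XGB": XGBClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```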