• Title/Summary/Keyword: facial classification

Search Results: 243

Sasang Constitution Detection Based on Facial Feature Analysis Using Explainable Artificial Intelligence (설명가능한 인공지능을 활용한 안면 특징 분석 기반 사상체질 검출)

  • Jeongkyun Kim;Ilkoo Ahn;Siwoo Lee
    • Journal of Sasang Constitutional Medicine
    • /
    • v.36 no.2
    • /
    • pp.39-48
    • /
    • 2024
  • Objectives The aim was to develop a method for detecting Sasang constitution based on the ratios of facial landmarks and to provide an objective and reliable tool for Sasang constitution classification. Methods Facial images, KS-15 scores, and certainty scores were collected from subjects identified by the Korean Medicine Data Center. Facial landmarks were detected, yielding 2279 facial ratio features. Tree-based models were trained to classify Sasang constitution, and Shapley Additive Explanations (SHAP) analysis was employed to identify important facial features. Additionally, Body Mass Index (BMI) and a personality questionnaire were incorporated as supplementary information to enhance model performance. Results Using the tree-based models, the accuracies for classifying the Taeeum, Soeum, and Soyang constitutions were 81.90%, 90.49%, and 81.90%, respectively. SHAP analysis revealed important facial features, and the inclusion of BMI and the personality questionnaire improved model performance. This demonstrates that facial ratio-based Sasang constitution analysis yields effective and accurate classification results. Conclusions Facial ratio-based Sasang constitution analysis provides rapid and objective results compared to traditional methods. This approach holds promise for enhancing personalized medicine in Korean traditional medicine.
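The pipeline above, a tree-based classifier over facial-ratio features followed by a per-feature attribution step, can be sketched in miniature. The rules, feature names, and thresholds below are illustrative stand-ins (not from the paper), and permutation importance is used here as a simpler attribution idea than the paper's SHAP analysis:

```python
import random

# Toy stand-in for the paper's tree-based models: two threshold rules over
# hypothetical facial-ratio features (eye_ratio, jaw_ratio). All names and
# thresholds are illustrative assumptions.
def classify(feats):
    eye_ratio, jaw_ratio = feats
    if jaw_ratio > 0.55:
        return "Taeeum"
    return "Soyang" if eye_ratio > 0.30 else "Soeum"

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feat_idx, seed=0):
    """Accuracy drop after shuffling one feature column: a simpler
    attribution than the SHAP values used in the paper."""
    base = accuracy(model, X, y)
    col = [x[feat_idx] for x in X]
    random.Random(seed).shuffle(col)
    X_perm = []
    for x, c in zip(X, col):
        row = list(x)
        row[feat_idx] = c
        X_perm.append(row)
    return base - accuracy(model, X_perm, y)

X = [(0.35, 0.60), (0.25, 0.40), (0.33, 0.45), (0.28, 0.58)]
y = ["Taeeum", "Soeum", "Soyang", "Taeeum"]
jaw_importance = permutation_importance(classify, X, y, 1)
```

A large accuracy drop marks a feature the classifier relies on, which is the same question the paper answers, feature by feature, with SHAP.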

Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction (안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정)

  • Hong, Seok-Mi;Yoo, Hyun
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.7
    • /
    • pp.1-6
    • /
    • 2021
  • The purpose of this study is to improve the performance of an artificial neural network system for facial image analysis through an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model for facial age classification is required. From the configured ResNet model, a heat map that detects the change of an output node according to the change of an input node is extracted. By combining multiple extracted heat maps, facial landmarks related to age classification are created. The importance of each pixel location can be analyzed through these facial landmarks. In addition, by removing the pixels with low weights, a significant amount of input data can be reduced.
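The heat-map idea, perturb an input node and record how much the output node changes, can be sketched with a toy scoring function standing in for the ResNet age output (both the scoring function and the image are illustrative assumptions):

```python
# Toy stand-in for the ResNet age-output node: responds only to a bright
# central patch (purely illustrative, not the paper's model).
def toy_age_score(img):
    return sum(img[r][c] for r in (2, 3) for c in (2, 3))

def occlusion_heatmap(img, score_fn):
    """Heat map of |output change| when each input pixel is zeroed out,
    mirroring the input-change/output-change heat maps described above."""
    base = score_fn(img)
    heat = [[0.0] * len(img[0]) for _ in img]
    for i in range(len(img)):
        for j in range(len(img[0])):
            saved = img[i][j]
            img[i][j] = 0.0
            heat[i][j] = abs(base - score_fn(img))
            img[i][j] = saved
    return heat

img = [[0.0] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (2, 3):
        img[r][c] = 1.0

heat = occlusion_heatmap(img, toy_age_score)
# Thresholding the map keeps only high-weight "landmark" pixels and lets
# low-weight pixels be dropped from the input, as in the abstract.
landmark_pixels = [(i, j) for i in range(6) for j in range(6) if heat[i][j] > 0]
```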

Improvement of Facial Emotion Recognition Performance through Addition of Geometric Features (기하학적 특징 추가를 통한 얼굴 감정 인식 성능 개선)

  • Hoyoung Jung;Hee-Il Hahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.155-161
    • /
    • 2024
  • In this paper, we propose a new model that adds landmark information as a feature vector to an existing CNN-based facial emotion classification model. Facial emotion classification using CNN-based models has been studied in various ways, but recognition rates remain low. To improve on CNN-based models, we propose an algorithm that increases facial expression classification accuracy by combining the CNN model with a fully connected network over landmarks obtained by the Active Shape Model (ASM). By including landmarks in the CNN model, the recognition rate was improved by several percentage points, and experiments confirmed that further improvement could be obtained by adding FACS-based action units to the landmarks.
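The fusion step, appending geometric landmark features to the CNN feature vector before the fully connected classifier, can be sketched as follows. Pairwise landmark distances are one common choice of geometric feature; the specific values and the two-element "CNN feature vector" are illustrative assumptions:

```python
import math

def landmark_distances(landmarks):
    """Pairwise Euclidean distances between (x, y) landmarks: a simple
    geometric feature derived from ASM-style landmark points."""
    feats = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
            feats.append(math.hypot(x2 - x1, y2 - y1))
    return feats

def fuse(cnn_features, landmarks):
    # Concatenation is the fusion step; the joint vector then feeds a
    # fully connected classifier (not shown).
    return cnn_features + landmark_distances(landmarks)

fused = fuse([0.2, 0.7], [(0, 0), (3, 4), (3, 0)])
```

The same concatenation pattern extends to FACS-based action-unit activations, which the paper reports as a further improvement.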

People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam;Lee, YongSub;Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.69-75
    • /
    • 2014
  • An existing people counting system using a single overhead-mounted camera has limitations in object recognition and counting in various environments. Those limitations are attributable to overlapping, occlusion, and external factors such as over-sized belongings and dramatic light changes. Thus, this paper proposes a new people counting system by facial age group that uses two depth cameras, at overhead and frontal viewpoints, in order to improve object recognition accuracy and make people counting robust to external factors. The proposed system counts pedestrians through five processes: overhead image processing, frontal image processing, identical object recognition, facial age group classification, and in-coming/out-going counting. The system was developed in C++ with OpenCV and the Kinect SDK, and a target group of 40 people (10 per age group) was set up to evaluate people counting and facial age group classification performance. The experimental results indicated approximately 98% accuracy in people counting and 74.23% accuracy in facial age group classification.
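The final stage of the five-process pipeline, tallying in-coming/out-going pedestrians per age group, reduces to a simple tally over recognized crossing events. The event format and group labels below are illustrative assumptions, not the paper's data structures:

```python
from collections import defaultdict

# Hypothetical crossing events (age_group, direction) as they might be
# emitted by the upstream recognition/classification stages.
events = [
    ("20s", "in"), ("30s", "in"), ("20s", "out"), ("40s", "in"),
]

def count_by_age_group(events):
    """In-coming/out-going counts keyed by facial age group."""
    counts = defaultdict(lambda: {"in": 0, "out": 0})
    for group, direction in events:
        counts[group][direction] += 1
    return dict(counts)

counts = count_by_age_group(events)
```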

Sasang Constitution Classification using Convolutional Neural Network on Facial Images (콘볼루션 신경망 기반의 안면영상을 이용한 사상체질 분류)

  • Ahn, Ilkoo;Kim, Sang-Hyuk;Jeong, Kyoungsik;Kim, Hoseok;Lee, Siwoo
    • Journal of Sasang Constitutional Medicine
    • /
    • v.34 no.3
    • /
    • pp.31-40
    • /
    • 2022
  • Objectives Sasang constitutional medicine is a traditional Korean medicine that classifies humans into four constitutions in consideration of individual differences in physical, psychological, and physiological characteristics. In this paper, we propose a method to classify Taeeum person (TE) and Non-Taeeum person (NTE), Soeum person (SE) and Non-Soeum person (NSE), and Soyang person (SY) and Non-Soyang person (NSY) using a convolutional neural network with facial images alone. Methods Based on the convolutional neural network VGG16 architecture, transfer learning was carried out on the facial images of 3738 subjects to classify TE and NTE, SE and NSE, and SY and NSY. Data augmentation techniques were used to increase classification performance. Results The classification performance for TE and NTE, SE and NSE, and SY and NSY was 77.24%, 85.17%, and 80.18% by F1 score and 80.02%, 85.96%, and 72.76% by Precision-Recall AUC (area under the precision-recall curve), respectively. Conclusions Soeum persons were found to have the most distinctive facial features, as that constitution had the best classification performance, followed by Taeeum persons and Soyang persons. The experimental results showed that it may be possible to classify constitutions from facial images alone. The performance is expected to increase with additional data such as BMI or a personality questionnaire.
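The evaluation setup, three one-vs-rest binary tasks each scored by F1, can be sketched independently of the network itself. The labels and predictions below are illustrative, not the paper's data:

```python
def one_vs_rest(labels, target):
    """Relabel four constitutions into a binary task (e.g. TE vs NTE),
    as in the paper's three pairwise classification problems."""
    return [1 if label == target else 0 for label in labels]

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

labels = ["TE", "SE", "SY", "TE"]       # hypothetical ground truth
y_true = one_vs_rest(labels, "TE")       # [1, 0, 0, 1]
y_pred = [1, 0, 1, 1]                    # hypothetical model output
score = f1_score(y_true, y_pred)
```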

A Study on the Facial Color & Shape of an Elderly Women (노인여성의 얼굴색과 얼굴 형태 분석)

  • Kim, Ae-Kyung;Lee, Kyung-Hee
    • Fashion & Textile Research Journal
    • /
    • v.11 no.1
    • /
    • pp.103-111
    • /
    • 2009
  • This study is intended to help with make-up and coordination for image-making through an analysis of the facial color and shape of elderly women. Facial color data and photographs of 212 elderly women aged 55-75 were analyzed by means of the SPSS 12.0 statistics package. On the basis of colorimetric data on the face measured with a Minolta CM2500D, this research analyzed facial color, patternized facial color, and analyzed it by age group; for facial shape, this research patternized facial shape and analyzed its characteristics using both contour-based facial shape analysis and Kamata facial shape analysis. As for facial color, it was found that the lower age bracket has a bright and reddish face, looking fine, while the higher age bracket has a conspicuously yellowish face, looking bad. Facial color was classified into three types: the facial color of the subjects belonging to Type 3, whose L value is the largest, looked the brightest; the faces of the subjects belonging to Type 2, whose a value is the largest, were strongly tinged with red; and the faces of the subjects belonging to Type 1, whose b value is the largest, were tinged with yellow. According to the analysis of facial shape, oval and long forms appeared in the classification by contour, while a lot of downward-directed and inner-directed power appeared in the classification by Kamata, which is believed to reflect the phenomenon that the chin line becomes roundish and facial length tends to increase with aging.

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) system was developed to help with facial expression practice for expressing emotions. The developed AI used multimodal inputs consisting of sentences and facial images for deep neural networks (DNNs). The DNNs calculated similarities between the emotions predicted from the sentences and the emotions predicted from the facial images. The user practiced facial expressions based on the situation given by a sentence, and the AI provided the user with numerical feedback based on the similarity between the emotion predicted from the sentence and the emotion predicted from the facial expression. A ResNet34 structure was trained on the public FER2013 data to predict emotions from facial images. To predict emotions in sentences, a KoBERT model was trained in a transfer-learning manner using the conversational speech dataset for emotion classification released by AIHub. The DNN that predicts emotions from facial images demonstrated 65% accuracy, which is comparable to human emotion classification ability. The DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary participant changed facial expressions.
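The feedback step compares the emotion distribution predicted from the sentence with the one predicted from the face. The abstract does not specify the similarity measure, so the sketch below assumes cosine similarity over the two probability vectors as one simple choice; the class order and probabilities are illustrative:

```python
import math

def cosine_similarity(p, q):
    """Cosine similarity between two emotion probability vectors:
    one assumed choice of similarity, not confirmed by the paper."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

text_probs = [0.7, 0.2, 0.1]   # e.g. (happy, sad, neutral) from the text model
face_probs = [0.6, 0.3, 0.1]   # same classes from the image model
# Numerical feedback for the user, scaled to 0-100.
feedback = round(100 * cosine_similarity(text_probs, face_probs))
```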

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong;Lee, Ji-Hyang;Park, Hyun-Jin
    • International journal of advanced smart convergence
    • /
    • v.8 no.2
    • /
    • pp.8-17
    • /
    • 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I - platform analysis and Phase II - classification of academic emotions. In Phase I, the results indicate that the existing affective analysis platforms can be largely classified into four types according to the emotion detection methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions were shown more frequently and for longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on the results, we proposed a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions, with facial expressions as the indicators.

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.806-813
    • /
    • 2010
  • Human facial expression conveys human emotion most exactly, so it can be used as an efficient tool for delivering human intention to a computer. For fast and exact recognition of facial expression in a 2D image, this paper proposes a new method that integrates a Discrete Adaboost classification algorithm and a neural network based recognition algorithm. In the first step, the Adaboost algorithm finds the position and size of a face in the input image. Second, the detected face image is input into five Adaboost strong classifiers, each trained for one facial expression. Finally, a neural network based recognition algorithm, trained on the outputs of the Adaboost strong classifiers, determines the final facial expression. The proposed algorithm achieves real-time operation and enhanced accuracy by exploiting the speed and accuracy of the Adaboost classification algorithm and the reliability of the neural network based recognition algorithm. In this paper, the proposed algorithm recognizes five facial expressions (neutral, happiness, sadness, anger, and surprise) and achieves 86-95% accuracy, depending on the expression type, in real time.
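The core of the Discrete Adaboost stage is the per-round reweighting of training samples toward the ones the current weak learner got wrong. A minimal sketch of one boosting round, with illustrative sample weights and error mask:

```python
import math

def adaboost_round(weights, misclassified):
    """One Discrete Adaboost round: given normalized sample weights and a
    boolean mask of which samples the weak learner misclassified, return
    the learner's vote weight alpha and the renormalized sample weights."""
    eps = sum(w for w, wrong in zip(weights, misclassified) if wrong)
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Misclassified samples are up-weighted, correct ones down-weighted.
    updated = [w * math.exp(alpha if wrong else -alpha)
               for w, wrong in zip(weights, misclassified)]
    z = sum(updated)
    return alpha, [w / z for w in updated]

weights = [0.25, 0.25, 0.25, 0.25]
alpha, new_weights = adaboost_round(weights, [True, False, False, False])
```

A strong classifier is the alpha-weighted vote of its weak learners; the paper then feeds the five strong classifiers' outputs to a neural network for the final decision.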

The branching patterns and termination points of the facial artery: a cadaveric anatomical study

  • Vu Hoang Nguyen;Lin Cheng-Kuan;Tuan Anh Nguyen;Trang Huu Ngoc Thao Cai
    • Archives of Craniofacial Surgery
    • /
    • v.25 no.2
    • /
    • pp.77-84
    • /
    • 2024
  • Background: The facial artery is an important blood vessel responsible for supplying the anterior face. Understanding the branching patterns of the facial artery plays a crucial role in medical specialties such as plastic surgery, dermatology, and oncology, and this knowledge contributes to improving the success rate of facial reconstruction and aesthetic procedures. However, debate continues regarding the classification of facial artery branching patterns in the existing literature. Methods: We conducted a comprehensive anatomical study, in which we dissected 102 facial arteries from 52 embalmed and formaldehyde-fixed Vietnamese cadavers at the Anatomy Department, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam. Results: Our investigation revealed eight distinct termination points and identified 35 combinations of branching patterns, including seven arterial branching patterns. The termination points included the inferior labial artery, superior labial artery, inferior alar artery, lateral nasal artery, typical angular artery, angular artery running along the lower border of the orbicularis oculi muscle, forehead branch, duplex, and short course (hypoplastic). Notably, the branching patterns of the facial artery displayed marked asymmetry between the left and right sides within the same cadaver. Conclusion: The considerable variation observed in the branching patterns and termination points of the facial artery makes it challenging to establish a definitive classification system for this vessel. It is therefore imperative to develop an anatomical map summarizing the major measurements and geometric features of the facial artery. Surgeons and medical professionals involved in facial surgery and procedures must consider the detailed anatomy and relative positioning of the facial artery to minimize the risk of unexpected complications.