• Title/Summary/Keyword: Facial image

A Flexible Feature Matching for Automatic Facial Feature Points Detection (얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • Hwang, Suen-Ki; Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.2 / pp.12-17 / 2010
  • An automatic facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points (FFPs) labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to establish feature correspondence between the model and the input image. The matching works like a random diffusion process in the image space, employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach by implementing it in the system.
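
The Gabor-labeled graph idea above can be illustrated with a small sketch. The code below is a minimal, hypothetical example of extracting a Gabor "jet" (filter-bank responses) at a candidate feature point and scoring how well it matches a model jet; the filter parameters, the cosine similarity, and the stand-in image are assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def gabor_jet(gray, point, ksize=31, sigmas=(4.0,), thetas=8, lambd=10.0):
    """Collect Gabor filter responses (a 'jet') at one feature point.

    Assumed filter-bank parameters; the paper's actual bank is not specified here.
    """
    x, y = point
    jet = []
    for sigma in sigmas:
        for k in range(thetas):
            theta = np.pi * k / thetas
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            response = cv2.filter2D(gray, cv2.CV_32F, kernel)
            jet.append(response[y, x])
    return np.array(jet, dtype=np.float32)

def jet_similarity(jet_a, jet_b):
    """Normalized dot-product (cosine) similarity between two jets."""
    denom = np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-8
    return float(np.dot(jet_a, jet_b) / denom)

# Usage: compare a stored model jet with a jet sampled at a candidate location.
# A real face crop would be loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE).
img = np.random.randint(0, 256, (240, 240), dtype=np.uint8)  # stand-in grayscale face
model_jet = gabor_jet(img, (120, 150))   # jet stored for a model FFP
candidate = gabor_jet(img, (122, 151))   # jet at a nearby candidate point
print("similarity:", jet_similarity(model_jet, candidate))
```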

Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak; Lee, Joonwhoan
    • Journal of Information Processing Systems / v.10 no.3 / pp.443-458 / 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-hidden-layer feedforward neural network. In this paper, we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, in cognitive science, and in social interaction. This paper presents a method for FER based on histogram of oriented gradients (HOG) features and an ELM ensemble. First, HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, each of which is used to train a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the outputs are combined by majority voting. The ELM ensemble with bagging significantly improves the generalization capability of the network. Two publicly available facial expression datasets (JAFFE and CK+) were used to evaluate the proposed classification system. Although the performance of an individual ELM was lower, the ELM ensemble built with the bagging algorithm improved the recognition performance significantly.
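
As a rough illustration of the pipeline described above, the sketch below implements a tiny ELM (random hidden layer plus a least-squares output layer) and a bagged ensemble combined by majority voting. The feature vectors stand in for HOG descriptors; the network size, the number of bags, and the synthetic data are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: random hidden weights, analytic output weights."""
    def __init__(self, n_hidden=100):
        self.n_hidden = n_hidden

    def fit(self, X, y_onehot):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                        # hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot                # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

def bagging_elm_ensemble(X, y, n_classes, n_models=15, n_hidden=100):
    """Train n_models ELMs on bootstrap resamples of (X, y)."""
    y_onehot = np.eye(n_classes)[y]
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample with replacement
        models.append(ELM(n_hidden).fit(X[idx], y_onehot[idx]))
    return models

def majority_vote(models, X):
    votes = np.stack([m.predict(X) for m in models])  # shape: (n_models, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Synthetic stand-in for HOG feature vectors of 7 expression classes.
X = rng.normal(size=(700, 128))
y = rng.integers(0, 7, size=700)
ensemble = bagging_elm_ensemble(X, y, n_classes=7)
print("train accuracy:", np.mean(majority_vote(ensemble, X) == y))
```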

A Flexible Feature Matching for Automatic Face and Facial Feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식;손형경;정연길;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.608-612 / 2002
  • An automatic face and facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points (FFPs) labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to establish feature correspondence between the model and the input image. The matching works like a random diffusion process in the image space, employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach by implementing it in a face identification system.

A Clinical Case Report of Taeeumin Patient Diagnosed with Systemic Contact Dermatitis and Facial Flushing (전신성 접촉피부염과 안면홍조를 호소하는 태음인 환자 치험 1례)

  • Min-jung, Lee; Jiyeon, Lee; Min-woo, Hwang
    • Journal of Sasang Constitutional Medicine / v.34 no.4 / pp.68-80 / 2022
  • Objectives This study reports a significant improvement, achieved with herbal medicine treatment, in a Taeeumin patient who developed systemic contact dermatitis after eating urushiol chicken. Methods The patient complained of erythema, swelling, pruritus, scaly skin, and facial flushing. We treated the patient with the herbal medicine Galgeunhaegi-tang for three months. We evaluated the outcome for systemic contact dermatitis using the Three Item Severity (TIS) score at every visit and evaluated facial flushing with an image color summarizer at the first and last visits. Results After treatment, the severity of the patient's skin complaints lessened from moderate to mild. The image color summarizer showed a minor decrease in the normalized red color level and a significant increase in the brightness level and facial color percentage. Conclusions The patient, diagnosed with systemic contact dermatitis and treated with Galgeunhaegi-tang for three months, showed a significant improvement in skin complaints, with a brighter and more even facial color.

Smart Mirror for Facial Expression Recognition Based on Convolution Neural Network (컨볼루션 신경망 기반 표정인식 스마트 미러)

  • Choi, Sung Hwan; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.200-203 / 2021
  • This paper introduces a smart mirror that recognizes a person's facial expression through image classification, one of several artificial intelligence techniques, and presents the result in the mirror. Five types of facial expression images are used for training. When someone looks at the smart mirror, it recognizes the user's expression and shows the recognized result on the mirror. The FER2013 dataset provided by Kaggle, which contains face images of many people labeled by expression, is used. For image classification, the network is trained as a convolutional neural network (CNN). The face is recognized and the result presented on the smart mirror's screen by an embedded board such as a Raspberry Pi 4.
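
To make the classification step concrete, here is a minimal, hypothetical Keras sketch of a small CNN for 48x48 grayscale FER2013-style images with five expression classes, as the abstract describes; the layer sizes, training settings, and random stand-in data are assumptions, not the architecture used in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # the paper trains five expression types (assumed subset of FER2013 labels)

def build_expression_cnn(input_shape=(48, 48, 1)):
    """Small CNN for facial expression classification (illustrative architecture)."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_expression_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; in practice the FER2013 CSV would be parsed into 48x48 arrays.
x_train = np.random.rand(256, 48, 48, 1).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)

# On the mirror, a detected face crop would be resized to 48x48 and passed to predict().
probs = model.predict(x_train[:1], verbose=0)
print("predicted class:", int(np.argmax(probs)))
```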

A Cross-Cultural Study of Facial Awareness, Influential Factors, and Attractiveness Preferences Among Korean, Japanese, and Chinese Men and Women Evaluating Korean Women by Facial Type (한국여성의 얼굴이미지 유형별 인식영향요소와 매력선호도에 대한 한중일 남녀 비교)

  • Baek, Kyoung-Jin; Kim, Young-In
    • Journal of the Korean Society of Costume / v.65 no.3 / pp.1-14 / 2015
  • The purpose of this study is to identify cross-cultural characteristics of Korea, China, and Japan by comparing differences in facial awareness, attractiveness preferences, and consideration of facial parts among Korean, Chinese, and Japanese men and women as they evaluated the faces of Korean women in their 20s. A survey was conducted targeting male and female Korean, Chinese, and Japanese college students in their 20s. Frequency analysis, ANOVA, the Duncan test, factor analysis, reliability analysis, and MANOVA were carried out using SPSS 18.0. The results are as follows. The faces of Korean women in their 20s, as evaluated by Korean, Chinese, and Japanese men and women in their 20s, were classified into four categories: 'Youthfulness', 'Classiness', 'Friendliness', and 'Activeness'. Differences in facial image awareness were observed depending on nationality and gender. Korean participants were found to place importance on overall morphological factors, the Japanese focused on the eyes, and the Chinese on skin color. Women of all nationalities showed, on average, a higher awareness of facial parts than men. No significant differences in facial attractiveness preferences were found based on nationality or gender, but there were differences in how participants evaluated faces for attractiveness, showing that the reasons for a preference may vary even when the preference is the same.
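
As a small illustration of the kind of group comparison reported above, the sketch below runs a one-way ANOVA on made-up attractiveness ratings for three nationality groups using SciPy; it stands in for the SPSS analyses mentioned in the abstract, and the data and group means are entirely hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical attractiveness ratings (1-5 scale) from three nationality groups.
korean = rng.normal(3.6, 0.6, size=50).clip(1, 5)
chinese = rng.normal(3.5, 0.6, size=50).clip(1, 5)
japanese = rng.normal(3.4, 0.6, size=50).clip(1, 5)

# One-way ANOVA: do mean ratings differ across the nationality groups?
f_stat, p_value = stats.f_oneway(korean, chinese, japanese)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```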

Face Detection using Orientation (In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns (LBP) (방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출)

  • Lee, Hee-Jae; Kim, Ha-Young; Lee, David; Lee, Sang-Goog
    • Journal of KIISE / v.44 no.7 / pp.692-702 / 2017
  • Face detection using an LBP-based feature descriptor has the limitation that it cannot represent spatial relations between the facial shape and facial components such as the eyes, nose, and mouth. To address this, previous research divided the facial image into a number of square sub-regions. However, since the sub-regions have been divided in varying numbers and sizes, the division criteria suitable for the database used in an experiment are ambiguous, the dimension of the LBP histogram grows in proportion to the number of sub-regions, and as the number of sub-regions increases, the sensitivity to in-plane facial rotation increases significantly. In this paper, we present a novel facial region segmentation method that addresses both the in-plane rotation issue of LBP-based feature descriptors and the dimensionality of the feature descriptor. As a result, the proposed method showed a detection accuracy of 99.0278% on single facial images rotated in plane.
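
For context on the descriptor being discussed, here is a minimal, hypothetical sketch of the conventional baseline: computing a uniform-LBP histogram per square sub-region and concatenating them, which is the approach whose region count and rotation sensitivity the paper addresses. The grid size and LBP parameters are assumptions, not the paper's proposed segmentation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def regional_lbp_descriptor(gray, grid=(4, 4), P=8, R=1.0):
    """Concatenate uniform-LBP histograms of square sub-regions (baseline descriptor)."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")  # codes in [0, P+1]
    n_bins = P + 2
    rows = np.array_split(np.arange(gray.shape[0]), grid[0])
    cols = np.array_split(np.arange(gray.shape[1]), grid[1])
    hists = []
    for r in rows:
        for c in cols:
            region = lbp[np.ix_(r, c)]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)  # length = grid[0] * grid[1] * (P + 2)

# Usage on a random stand-in image; a real detected face crop would be used instead.
face = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
desc = regional_lbp_descriptor(face)
print("descriptor length:", desc.shape[0])  # 4 * 4 * 10 = 160
```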

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok; Baek, Jae-Ho; Kim, Eun-Tai; Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.1-5 / 2009
  • This paper applies a mirror-reflected-method-based 2D emotion recognition database to a 3D application and models facial expressions with fuzzy logic using emotion probabilities. The suggested facial expression function applies fuzzy theory to three basic movements for facial expressions. The method carries feature vectors for emotion recognition, obtained in 2D with mirror-reflected multi-images, over to the 3D application. Thus, we obtain a fuzzy, nonlinear facial expression model of a 3D avatar from a 2D model of a real person. We use the average probabilities of the six basic expressions: happiness, sadness, disgust, anger, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. This paper compares and analyzes the feature vectors of the real model with those of a 3D human-like avatar.
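
As a loose illustration of driving an avatar from emotion probabilities, the sketch below blends per-expression displacement vectors for a few facial control points, weighted by the six emotion probabilities. The control points, displacement values, and the simple weighted-sum blending are illustrative assumptions and are not the paper's fuzzy model.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]

# Hypothetical 3D displacements of three facial control points (e.g., mouth corners, brow)
# for each basic expression, relative to the neutral face: shape (6 emotions, 3 points, xyz).
expression_displacements = np.random.default_rng(2).normal(scale=0.01, size=(6, 3, 3))

def blend_expression(neutral_points, emotion_probs):
    """Weighted blend of basic-expression displacements by emotion probabilities."""
    w = np.asarray(emotion_probs, dtype=float)
    w = w / w.sum()                                              # normalize weights to sum to 1
    offset = np.tensordot(w, expression_displacements, axes=1)   # -> (3 points, xyz)
    return neutral_points + offset

neutral = np.zeros((3, 3))                    # stand-in neutral control points
probs = [0.55, 0.05, 0.05, 0.05, 0.25, 0.05]  # e.g., mostly happy, slightly surprised
print(blend_expression(neutral, probs))
```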

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul; Kim, Yoon-Kyoung; Bea, Min-Kyoung; Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements are analyzed in response to opposite emotion stimuli by image processing of Kinect facial images. To induce the two opposite emotion pairs "Sad - Excitement" and "Contentment - Angry", which lie on opposite sides of Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen from the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed; a local minimum shift matching method was used to handle the problem of non-linear facial movement. In the results, right-side and left-side facial movements occurred for the "Sad" and "Excitement" emotions, respectively. Left-side facial movement was comparatively more frequent for the "Contentment" emotion, whereas both left- and right-side movements occurred for the "Angry" emotion.
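
To illustrate the kind of local analysis described above, here is a small, hypothetical block-matching sketch: for a patch around one feature point in frame A, it searches a small neighborhood in frame B for the shift with the minimum difference, a simple stand-in for the local minimum shift matching mentioned in the abstract. The patch size, search radius, and cost function are assumptions.

```python
import numpy as np

def local_min_shift(frame_a, frame_b, point, patch=7, search=5):
    """Find the (dy, dx) shift that minimizes patch difference around one feature point."""
    y, x = point
    h = patch // 2
    ref = frame_a[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y + dy - h:y + dy + h + 1,
                           x + dx - h:x + dx + h + 1].astype(float)
            cost = np.abs(ref - cand).sum()  # sum of absolute differences
            if cost < best:
                best, best_shift = cost, (dy, dx)
    return best_shift, best

# Usage with random stand-in frames; real input would be consecutive Kinect face frames.
rng = np.random.default_rng(3)
frame_a = rng.integers(0, 256, size=(100, 100))
frame_b = np.roll(frame_a, shift=(1, 2), axis=(0, 1))     # second frame shifted by (1, 2)
print(local_min_shift(frame_a, frame_b, point=(50, 50)))  # expected shift: (1, 2)
```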

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1310-1318 / 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on a 3D scanner or camera. Without expensive image-input equipment, 3D models can easily be created from only front and side images. The system can animate 3D facial models by connecting to an animation server on the WWW, independent of specific platforms and software, and was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameters) feature points from the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameters). This system can be used to generate an avatar on the WWW.
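
As a rough sketch of the model-adaptation step described above, the code below warps the vertices of a generic face mesh toward detected feature-point positions using inverse-distance weighting; the feature points, vertex data, and the weighting scheme are illustrative assumptions, not the MPEG-4 FDP set or the paper's deformation method.

```python
import numpy as np

def adapt_generic_model(vertices, model_fdp, target_fdp, power=2.0, eps=1e-6):
    """Warp generic-mesh vertices so model feature points move toward detected positions.

    vertices:   (V, 3) generic face mesh vertices
    model_fdp:  (K, 3) feature-point positions on the generic mesh
    target_fdp: (K, 3) corresponding positions detected from the input images
    """
    offsets = target_fdp - model_fdp  # (K, 3) per-feature displacement
    # Inverse-distance weights of each vertex with respect to every feature point.
    d = np.linalg.norm(vertices[:, None, :] - model_fdp[None, :, :], axis=2)  # (V, K)
    w = 1.0 / (d ** power + eps)
    w = w / w.sum(axis=1, keepdims=True)  # normalize weights per vertex
    return vertices + w @ offsets         # (V, 3) deformed vertices

# Tiny usage example with made-up data.
rng = np.random.default_rng(4)
verts = rng.normal(size=(500, 3))   # stand-in generic face mesh
fdp_model = verts[:5].copy()        # pretend the first 5 vertices are feature points
fdp_target = fdp_model + 0.1        # detected points, slightly displaced
new_verts = adapt_generic_model(verts, fdp_model, fdp_target)
print(new_verts.shape)
```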