• Title/Summary/Keyword: facial features

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng;Jin, Rize;Lu, Liangfu;Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.6
    • /
    • pp.2115-2127
    • /
    • 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches for fine-grained control over the generation process in semantic facial attribute editing are still lacking. In this work, we propose a novel cross channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while editing the target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.
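
The FID metric reported above compares the mean and covariance of Inception-network features extracted from real and generated images. The following is a minimal numpy sketch under a simplifying diagonal-covariance assumption; the feature arrays are random stand-ins, not CelebA statistics:

```python
import numpy as np

def fid_diagonal(feats_a, feats_b):
    """Frechet distance between two feature sets, assuming diagonal
    covariances: ||mu_a - mu_b||^2 + sum_i (sd_a_i - sd_b_i)^2."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sd_a, sd_b = feats_a.std(axis=0), feats_b.std(axis=0)
    return float(np.sum((mu_a - mu_b) ** 2) + np.sum((sd_a - sd_b) ** 2))

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 8))            # stand-in for Inception features
fake_feats = rng.normal(loc=0.5, size=(500, 8))   # shifted "generated" features
print(fid_diagonal(real_feats, real_feats))  # identical sets -> 0.0
print(fid_diagonal(real_feats, fake_feats))  # positive, grows with the mismatch
```

The full FID uses complete covariance matrices and a matrix square root of their product (e.g. `scipy.linalg.sqrtm`); the diagonal form here only illustrates the structure of the distance.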

Side Face Features' Biometrics for Sasang Constitution (사상체질 판별을 위한 측면 얼굴 이미지에서의 특징 검출)

  • Zhang, Qian;Lee, Ki-Jung;WhangBo, Taeg-Keun
    • Journal of Internet Computing and Services
    • /
    • v.8 no.6
    • /
    • pp.155-167
    • /
    • 2007
  • According to Sasang typology, there are four types of human beings, and Oriental medical doctors frequently prescribe healthcare information and treatment depending on one's type. The feature ratios (Table 1) on the human face are the most important criteria for deciding which type a patient is. In this paper, we propose a system to extract these feature ratios from a person's side face. There are two challenges in acquiring the feature ratios: one is selecting representative features; the other is detecting the region of interest in a profile facial image effectively and calculating the feature ratios accurately. In our system, an adaptive color model is used to separate the side face from the background, and a method based on a geometrical model is designed for region-of-interest detection. We then present an analysis of the error caused by image variation in terms of image size and head pose. To verify the efficiency of the proposed system, several experiments were conducted using about 173 Koreans' left-side facial photographs. Experimental results show that the accuracy of our system increases by 17.99% after combining the front-face features with the side-face features, instead of using the front-face features only.
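
The feature ratios in question are ratios of distances between facial landmarks. A hedged sketch of the arithmetic follows, with invented profile landmark coordinates; the actual landmark set and ratio definitions are those of the paper's Table 1:

```python
import numpy as np

def feature_ratio(p1, p2, p3, p4):
    """Ratio of the distance p1-p2 to the distance p3-p4
    (all points are (x, y) landmark coordinates in pixels)."""
    d_num = np.linalg.norm(np.subtract(p1, p2))
    d_den = np.linalg.norm(np.subtract(p3, p4))
    return d_num / d_den

# Hypothetical profile landmarks (pixel coordinates), for illustration only.
forehead, nose_tip, chin = (120, 40), (160, 110), (125, 180)
print(feature_ratio(forehead, nose_tip, nose_tip, chin))
```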

A Study on Appearance-Based Facial Expression Recognition Using Active Shape Model (Active Shape Model을 이용한 외형기반 얼굴표정인식에 관한 연구)

  • Kim, Dong-Ju;Shin, Jeong-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.1
    • /
    • pp.43-50
    • /
    • 2016
  • This paper introduces an appearance-based facial expression recognition method that uses ASM landmarks to acquire a detailed face region. In particular, an EHMM-based algorithm and an SVM classifier with histogram features are employed for appearance-based facial expression recognition, and the proposed method was evaluated on the CK and JAFFE facial expression databases. In addition, its performance was compared with a distance-based face normalization method and a geometric-feature-based facial expression approach that employs geometrical features of ASM landmarks and an SVM algorithm. As a result, the proposed method using ASM-based face normalization showed performance improvements of 6.39% and 7.98% over the previous distance-based face normalization method on the CK and JAFFE databases, respectively. The proposed method also outperformed the geometric-feature-based approach, confirming its effectiveness.

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.3
    • /
    • pp.284-291
    • /
    • 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions using PCA and an expression HMM (EHMM), a Hidden Markov Model approach that represents the spatial information and temporal dynamics of time-varying visual expression patterns. Because low-level spatial feature extraction is fused with temporal analysis, this unified spatio-temporal HMM approach is effective for the common detection, tracking, and classification problems. Recognition is accomplished by applying the posterior probability relating current visual observations to previous visual evidence. Consequently, the framework shows accurate and robust recognition of the six basic facial expression patterns as well as simple expressions. The method supports a set of important tasks such as facial expression recognition, HCI, and key-frame extraction.
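
The posterior step can be illustrated with per-class HMM likelihoods combined by Bayes' rule: score the observation sequence under each expression model, then normalize. A toy numpy sketch with two hypothetical expression models follows; the parameters are invented for illustration, not the paper's learned EHMMs:

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log p(obs | model) for a discrete-emission
    HMM (pi: initial probs, A: transitions, B[state, symbol]: emissions)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def expression_posterior(obs, models, priors):
    """Posterior over expression classes via Bayes' rule (log-sum-exp trick)."""
    logs = np.array([hmm_log_likelihood(obs, *m) for m in models]) + np.log(priors)
    w = np.exp(logs - logs.max())
    return w / w.sum()

A = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
smile   = (pi, A, np.array([[0.8, 0.2], [0.7, 0.3]]))  # mostly emits symbol 0
neutral = (pi, A, np.array([[0.2, 0.8], [0.3, 0.7]]))  # mostly emits symbol 1
obs = [0, 0, 0, 0]
post = expression_posterior(obs, [smile, neutral], np.array([0.5, 0.5]))
print(post)  # strongly favours the "smile" model
```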

THE CEPHALOMETRIC STUDY OF FACIAL TYPES IN CLASS III MALOCCLUSION (III급 부정교합자의 안모유형에 관한 연구)

  • Kim, Soo-Cheol;Lee, Ki-Soo
    • The korean journal of orthodontics
    • /
    • v.20 no.3 s.32
    • /
    • pp.519-539
    • /
    • 1990
  • The aim of this study is to observe the distribution of various facial types in class III malocclusion and to characterize the craniofacial features of those facial types. Cephalometric headplates of one hundred and ten persons showing bilateral class III malocclusion (mean age 12.51 years) and sixty-nine persons with normal occlusion (mean age 12.23 years) were measured and statistically analyzed. The following summary and conclusions were drawn. 1. Based on SNA and SNB, 35.45% of the class III sample showed a normally positioned maxilla and protruded mandible, 30.00% a retruded maxilla and normally positioned mandible, 15.45% a retruded maxilla and protruded mandible, 10.90% both maxilla and mandible within the normal range, and 8.20% miscellaneous types. 2. In the class III sample, 52.72% showed a neutrodivergent, 35.45% a hyperdivergent, and 11.81% a hypodivergent mandible. 3. Based on the facial and mandibular planes, 33.63% of the class III sample were prognathic and neutrodivergent, 20.90% mesognathic and hyperdivergent, 17.27% prognathic and hyperdivergent, and 15.45% mesognathic and neutrodivergent. 4. The class III malocclusion group exhibited a shorter cranial base, a smaller saddle angle, and larger articular and gonial angles. It showed a retropositioned maxilla and a forward-positioned mandible despite no significant differences in the linear measurements of the mandible. Anterior lower facial height was significantly larger in class III malocclusion, while posterior total and anterior total facial heights exhibited no significant differences. 5. It is suggested that class III malocclusion is attributable to a shorter cranial base, a smaller saddle angle, maxillary deficiency and/or retrusion, mandibular excess and/or protrusion, excessive vertical growth of the anterior lower face, and combinations thereof.

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions (얼굴의 다양한 포즈 및 표정의 변환에 따른 얼굴 인식률 향상에 관한 연구)

  • Choi Jae-Young;Whangbo Taeg-Keun;Kim Nak-Bin
    • Journal of Internet Computing and Services
    • /
    • v.5 no.6
    • /
    • pp.79-91
    • /
    • 2004
  • Detection and recognition of faces in various poses has been a difficult problem, because the distribution of varied poses in a feature space is more dispersed and more complicated than that of frontal faces. This thesis proposes a robust pose- and expression-invariant face recognition method to overcome the insufficiency of existing face recognition systems. First, we apply the TSL color model to detect the facial region and estimate the direction of the face using facial features; the estimated pose vector is decomposed into X-Y-Z axes. Second, the input face is mapped by a deformable template using these vectors and the 3D CANDIDE face model. Finally, the mapped face is transformed by the estimated pose vector into a frontal face appropriate for recognition. Through the experiments, we validate the face detection model and the method for estimating facial poses. Moreover, the tests show that the recognition rate is greatly boosted through normalization of poses and expressions.
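
The frontal-face transformation at the end can be illustrated by undoing an estimated yaw rotation on 3D landmark points. This is a minimal sketch with an invented landmark array; the paper's full pipeline additionally uses the CANDIDE template and the other pose axes:

```python
import numpy as np

def yaw_matrix(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def normalize_to_frontal(points3d, estimated_yaw):
    """Rotate landmarks by the inverse of the estimated yaw angle."""
    return points3d @ yaw_matrix(-estimated_yaw).T

frontal = np.array([[0.0, 0.0, 1.0],    # nose tip (hypothetical landmarks)
                    [-0.3, 0.2, 0.8],   # left eye corner
                    [0.3, 0.2, 0.8]])   # right eye corner
observed = frontal @ yaw_matrix(np.radians(30)).T   # head turned 30 degrees
recovered = normalize_to_frontal(observed, np.radians(30))
print(np.allclose(recovered, frontal))  # True
```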

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom;Ko, Hyun-kwan;Lee, Seon-Gyeong;Song, Bok-Deuk;Kim, Chae-Kyu;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.535-546
    • /
    • 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results given varied attitudes, angles, and facial diversity. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face region detection, feature extraction, face alignment, face region segmentation, 3D attitude adjustment, and facial transposition must all operate at a precise level, and each must be able to be combined interdependently. Our analysis shows that, among facial replacement technologies, implementation difficulty and contribution to the system are highest for facial feature point extraction and facial alignment. On the other hand, the facial transposition and three-dimensional posture adjustment techniques are less difficult but still show a need for development. In this paper, we propose four facial replacement models suitable for the COX platform: 2-D Faceswap, OpenPose, Deepfake, and CycleGAN. These models are respectively suited to frontal face pose image conversion, face pose images with active body movement, face movement up to 15 degrees to the left and right, and Generative Adversarial Network based synthesis.

Difference in visual attention during the assessment of facial attractiveness and trustworthiness (얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이)

  • Sung, Young-Shin;Cho, Kyung-Jin;Kim, Do-Yeon;Kim, Hack-Jin
    • Science of Emotion and Sensibility
    • /
    • v.13 no.3
    • /
    • pp.533-540
    • /
    • 2010
  • This study was designed to examine the difference in visual attention between evaluations of facial attractiveness and facial trustworthiness, which may be the two most fundamental social evaluations for forming first impressions in various types of social interactions. In study 1, participants were asked to evaluate the attractiveness and trustworthiness of 40 new faces while their gaze directions were recorded using an eye-tracker. The analysis revealed that participants spent significantly longer gaze fixation time on certain facial features, such as the eyes and nose, during the evaluation of facial trustworthiness compared to facial attractiveness. In study 2, participants performed the same face evaluation tasks, except that a word was briefly displayed on a certain facial feature in each face trial; these trials were followed by unexpected recall tests of the previously viewed words. The analysis demonstrated that the recognition rate of words that had been presented on the nose was significantly higher for the facial trustworthiness task than for the facial attractiveness task. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness in terms of the allocation of attentional resources.
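
The fixation-time comparison reduces to summing gaze-sample durations that fall inside rectangular areas of interest (AOIs) such as the eyes and nose. A hedged sketch with invented sample data follows; real eye-trackers report richer fixation events than this:

```python
import numpy as np

def fixation_time_per_aoi(gaze_xy, durations_ms, aois):
    """Sum sample durations falling inside each named rectangular AOI.
    aois: {name: (x_min, y_min, x_max, y_max)} in gaze coordinates."""
    totals = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                  (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
        totals[name] = float(durations_ms[inside].sum())
    return totals

# Hypothetical AOIs and gaze samples (pixel coordinates, ms durations).
aois = {"eyes": (100, 80, 300, 130), "nose": (170, 140, 230, 200)}
gaze = np.array([[150, 100], [200, 160], [210, 170], [400, 400]])
durations = np.array([120.0, 200.0, 180.0, 90.0])
print(fixation_time_per_aoi(gaze, durations, aois))
# {'eyes': 120.0, 'nose': 380.0}
```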

Salivary Duct Carcinoma in Parotid Deep Lobe, Involving the Buccal Branch of Facial Nerve : A Case Report (이하선의 심엽에 위치하며 안면신경의 볼가지를 침범한 타액관 암종 1예)

  • Kim, Jung Min;Kwak, Seul Ki;Kim, Seung Woo
    • Korean Journal of Head & Neck Oncology
    • /
    • v.28 no.2
    • /
    • pp.125-128
    • /
    • 2012
  • Salivary duct carcinoma (SDC) is a highly malignant tumor of the salivary gland. It is clinically characterized by rapid onset and progression, and the neoplasm is often associated with pain and facial paralysis. The nodal recurrence rate is high, and distant metastasis is common. SDC resembles high-grade ductal carcinoma of the breast. Curative surgical resection and postoperative radiation are the mainstay of treatment. If facial paralysis is present, a radical parotidectomy is mandatory. Regardless of the primary location of the SDC, ipsilateral functional neck dissection is indicated, because regional lymphatic spread must be expected in the majority of patients already at the time of diagnosis. If there is minor gland involvement, a bilateral neck dissection should be performed, because lymphatic drainage may occur to the contralateral side. The survival of SDC patients is poor, with most dying within three years. We experienced a unique case of SDC in the deep lobe of the parotid and report the clinicopathologic features of this tumor with a review of the literature.

Recognition of Facial Emotion Using Multi-scale LBP (멀티스케일 LBP를 이용한 얼굴 감정 인식)

  • Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.12
    • /
    • pp.1383-1392
    • /
    • 2014
  • In this paper, we propose a method for facial emotion recognition that automatically determines the optimal radius through a multi-scale LBP operation, generalizing the radius of the operator and applying boosting-based learning. Looking at the distribution of feature vectors, the most common pattern was $LBP_{8,1}$ at 31%, and $LBP_{8,1}$ and $LBP_{8,2}$ together accounted for 57.5%; $LBP_{8,3}$, $LBP_{8,4}$, and $LBP_{8,5}$ accounted for 18.5%, 12.0%, and 12.0%, respectively. Patterns with relatively larger radii were found to express facial characteristics well. For neutral and anger expressions, $LBP_{8,1}$ and $LBP_{8,2}$ were dominant, while for laughter and surprise the share of $LBP_{8,3}$ was greater than or equal to that of $LBP_{8,1}$. Thus, radii greater than 1 or 2 proved useful for recognizing specific emotions. The facial expression recognition rate of the proposed multi-scale LBP method was 97.5%, and its superiority was confirmed through various experiments.
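
The core operation is an 8-point LBP code whose sampling circle radius varies. Below is a simplified numpy sketch using nearest-pixel sampling; reference implementations interpolate bilinearly, and the paper's boosting step for radius selection is omitted:

```python
import numpy as np

def lbp_code(img, y, x, radius=1, points=8):
    """LBP code at (y, x): each neighbor on a circle of the given radius
    contributes a bit set when it is >= the center pixel."""
    center = img[y, x]
    code = 0
    for p in range(points):
        angle = 2.0 * np.pi * p / points
        ny = int(round(y + radius * np.sin(angle)))
        nx = int(round(x + radius * np.cos(angle)))
        code |= int(img[ny, nx] >= center) << p
    return code

def multiscale_lbp_histogram(img, radii=(1, 2, 3)):
    """Concatenate 256-bin LBP histograms computed at several radii."""
    hists = []
    for r in radii:
        codes = [lbp_code(img, y, x, radius=r)
                 for y in range(r, img.shape[0] - r)
                 for x in range(r, img.shape[1] - r)]
        hists.append(np.bincount(codes, minlength=256))
    return np.concatenate(hists)

flat = np.full((9, 9), 5)
print(lbp_code(flat, 4, 4, radius=1))   # all neighbors equal center -> 255
print(len(multiscale_lbp_histogram(flat)))  # 3 radii x 256 bins = 768
```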