• Title/Summary/Keyword: Facial shape

Search Results: 327

Implementation of Hair Style Recommendation System Based on Big data and Deepfakes (빅데이터와 딥페이크 기반의 헤어스타일 추천 시스템 구현)

  • Tae-Kook Kim
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.13-19 / 2023
  • In this paper, we investigated the implementation of a hairstyle recommendation system based on big data and deepfake technology. The proposed system recognizes the user's facial shape from a photo (image). Facial shapes are classified as oval, round, or square, and hairstyles that suit each facial shape are synthesized using deepfake technology and provided as videos. Hairstyles are recommended from big data by applying the latest trends and styles that suit the facial shape. Using an image segmentation map and the Motion Supervised Co-Part Segmentation algorithm, elements belonging to the same category (such as hair and face) can be synthesized between images. Next, the synthesized image with the new hairstyle and a pre-defined video are fed to the Motion Representations for Articulated Animation algorithm to generate a video animation. The proposed system is expected to be useful in various areas of the beauty industry, including virtual fitting. In future research, we plan to develop a smart mirror that recommends hairstyles and incorporates features such as Internet of Things (IoT) functionality.
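
As a rough illustration of the first stage described above, the sketch below classifies a face as oval, round, or square from a few geometric measurements and looks up candidate hairstyles. It is not the paper's implementation; the measurement names, thresholds, and catalog entries are assumptions.

```python
# Minimal sketch (not the paper's implementation): classify a face as
# oval, round, or square from hypothetical landmark measurements, then
# look up hairstyles. Thresholds and catalog entries are illustrative.

def classify_face_shape(face_height: float, face_width: float, jaw_width: float) -> str:
    """Return 'oval', 'round', or 'square' from simple geometric ratios."""
    aspect = face_height / face_width    # taller faces lean toward oval
    jaw_ratio = jaw_width / face_width   # wide, angular jaws lean toward square
    if aspect >= 1.3:
        return "oval"
    if jaw_ratio >= 0.9:
        return "square"
    return "round"

def recommend_hairstyles(face_shape: str) -> list[str]:
    # Hypothetical lookup standing in for the big-data trend analysis
    # described in the abstract.
    catalog = {
        "oval": ["layered cut", "side part"],
        "round": ["long layers", "high-volume top"],
        "square": ["soft waves", "side-swept bangs"],
    }
    return catalog.get(face_shape, [])

if __name__ == "__main__":
    shape = classify_face_shape(face_height=19.5, face_width=14.0, jaw_width=11.5)
    print(shape, recommend_hairstyles(shape))
```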

Facial Region Tracking by Infra-red and CCD Color Image (CCD 컬러 영상과 적외선 영상을 이용한 얼굴 영역 검출)

  • Yoon, T.H.;Kim, K.S.;Han, M.H.;Shin, S.W.;Kim, I.Y.
    • Proceedings of the KIEE Conference / 2005.05a / pp.60-62 / 2005
  • In this study, an automatic face-tracking algorithm is proposed that uses YCbCr color coordinate information together with thermal properties expressed as thermal indexes in an infrared image. Facial candidates are estimated separately in the CbCr color domain and the infrared domain by applying morphological image processing operations and geometrical shape measures that fit the elliptical features of a human face. A true face is then identified by a logical 'AND' operation between the refined images from the CbCr color and infrared domains.
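
A minimal sketch of this dual-domain idea is shown below, assuming an 8-bit BGR color frame and an aligned 8-bit infrared frame; the threshold values and kernel sizes are illustrative, not taken from the paper, and the elliptical shape-fitting step is omitted.

```python
# Minimal sketch (assumed pipeline, not the authors' code): estimate facial
# candidates in the CbCr color domain and in an infrared image, then combine
# them with a logical AND, roughly following the abstract's description.
import cv2
import numpy as np

def face_candidates(bgr_image: np.ndarray, ir_image: np.ndarray) -> np.ndarray:
    # Skin-like region in CrCb space (threshold values are illustrative).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Warm region in the aligned infrared image (assumed 8-bit thermal index).
    ir_mask = ((ir_image >= 180).astype(np.uint8)) * 255

    # Morphological cleanup of both candidate maps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
    ir_mask = cv2.morphologyEx(ir_mask, cv2.MORPH_CLOSE, kernel)

    # A true face is where both domains agree.
    return cv2.bitwise_and(skin_mask, ir_mask)
```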

Facial Impression Analysis Using SVM (SVM을 이용한 얼굴 인상 분석)

  • Jang, Kyung-Shik;Woo, Young-Woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.965-968 / 2007
  • In this paper, we propose an efficient method to classify human facial impressions from face images. Features that represent the shapes of the eyes, jaw, and face are used. The proposed method applies PCA, LDA, and SVM in series and classifies a face into one of eight facial impressions. Experiments performed on many face images show encouraging results.
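
The abstract's "PCA, LDA and SVM in series" pipeline can be sketched as follows with scikit-learn; the data, feature dimensionality, and component counts are made up for illustration and are not the authors' setup.

```python
# Minimal sketch (assumed setup, not the authors' code): a PCA -> LDA -> SVM
# pipeline classifying faces into eight impression classes.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Hypothetical data: each row is a feature vector describing eye, jaw,
# and face shape; labels are eight impression classes (0..7).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = rng.integers(0, 8, size=200)

model = Pipeline([
    ("pca", PCA(n_components=30)),                        # dimensionality reduction
    ("lda", LinearDiscriminantAnalysis(n_components=7)),  # at most n_classes - 1 components
    ("svm", SVC(kernel="rbf")),                           # final impression classifier
])
model.fit(X, y)
print(model.predict(X[:5]))
```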

Various Genioplasty techniques and case presentations (턱끝 성형술의 다양한 절골술식과 임상증례)

  • Choi, Jin Young
    • The Journal of the Korean Dental Association / v.58 no.2 / pp.94-102 / 2020
  • The form and location of the chin are important factors that determine facial impression. Genioplasty is becoming popular as a way to improve facial impression, since facial beauty is now regarded as something that can be improved. Through genioplasty, the chin can be moved to the desired position in three dimensions. Genioplasty is relatively simple, but precise diagnosis and accurate surgical technique are essential for accurate and satisfying results. The form and shape of the chin can be analyzed on its own, but it must also be evaluated in relation to the nose, lips, and the rest of the face. The author introduces the analysis of the chin and various surgical techniques of genioplasty, and presents some cases.

Study on the Relationship Between 12Meridians Flow and Facial Expressions by Emotion (감정에 따른 얼굴 표정변화와 12경락(經絡) 흐름의 상관성 연구)

  • Park, Yu-Jin;Moon, Ju-Ho;Choi, Su-Jin;Shin, Seon-Mi;Kim, Ki-Tae;Ko, Heung
    • Journal of Physiology & Pathology in Korean Medicine / v.26 no.2 / pp.253-258 / 2012
  • Facial expression is an important method of communication. In oriental medicine, the face changes shape according to emotion, and differences arise in physiology and pathology. To verify this theory, we studied the correlation between emotional facial expressions and the flow of the meridians and collaterals. Dividing the facial regions by meridian: the outer brow belongs to the Gallbladder meridian, the inner brow to the Bladder meridian, the medial canthus to the Bladder meridian, the lateral canthus to the Gallbladder meridian, the upper eyelid to the Bladder meridian, the lower eyelid to the Stomach meridian, the central cheeks to the Stomach meridian, the lateral cheeks to the Small Intestine meridian, and the upper and lower lips, lip corners, and chin to the Small and Large Intestine meridians. Six meridians and collaterals were associated with happiness, which suggests that happiness carries high importance in facial expression. Five were associated with anger, and four with fear and sadness, which suggests that fear and sadness carry lower importance in facial expression than the other emotions. Based on the yang meridians, whose flow in the body is originally descending, the ratios of anterograde to retrograde flow were happiness 3:4, anger 2:5, sadness 5:3, and fear 4:1. Based on the meridian flow over the face, the ratios of anterograde to retrograde flow were happiness 5:2, anger 3:4, sadness 3:5, and fear 4:1. We found that the actual change in meridian and collateral flow by emotion does not correspond to the expected change.

Relationship Between Morphologic measurement of Facial Feature and Eating Behavior During a Meal (얼굴생김새와 식사행동과의 관련성)

  • Kim, Gyeong-Eup;Kim, Seok-Young
    • Journal of the Korean Society of Food Culture / v.16 no.2 / pp.109-117 / 2001
  • Judging from studies carried out by Dr. Jo, Yong Jin on Korean faces, Koreans can be divided into two constitutions according to their facial features and heritage. One population is of the Northern lineage, whose ancestors migrated from Siberia during the ice age. In order to survive the cold climate, they developed a high level of metabolic heat production, and cold adaptation for preventing heat loss resulted in a reduced facial surface area with small eyes, nose, and lips. The other population is of the Southern lineage, descended from natives of the Korean peninsula; they have big eyes with double eyelids, broad noses, and thick lips. It is generally believed that both genetic and environmental factors influence eating behavior. Although we cannot determine the heritage that may contribute to metabolism and eating behavior, we commonly recognize physiological heritage according to facial features. In order to investigate the relationship among the size and shape of facial features, eating behavior, and anthropometric measurements in female college students, eating behavior was measured during an instant-noodle lunch eaten in a laboratory setting at an ambient temperature of 23°C. The anterior surface area of the left eye and the length of the right eye were positively correlated with the difference between the peak postprandial and the meal-start core temperature. The surface area of the lower lip was negatively correlated with the meal-start core temperature and meal duration. In addition, the total lip area was positively correlated with the difference between the peak postprandial and the meal-start core temperature and negatively correlated with the meal duration. However, anthropometric measurements were not related to the size of facial features.

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1495-1504 / 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step, the algorithm extracts these facial features (eyebrows, eyes, nose, mouth, and chin). From the extracted features it detects shape and form information and the ratios of distances between features, and formulates them into evaluation functions that classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these feature classes, it analyzes a face. The face analysis algorithm also uses information about the pixel distribution and gradient of each feature; in other words, it analyzes a face by comparing such information across the features.
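
A toy version of such a ratio-based evaluation function is sketched below: it measures one mouth width-to-height ratio from landmark points and picks the closest template type. The landmark names, template ratios, and type labels are assumptions, not the paper's values.

```python
# Minimal sketch (illustrative only): score a facial-feature type from a
# distance ratio between landmark points, in the spirit of the evaluation
# functions described in the abstract.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_ratio(landmarks):
    """Width-to-height ratio of the mouth from four landmark points."""
    width = dist(landmarks["mouth_left"], landmarks["mouth_right"])
    height = dist(landmarks["mouth_top"], landmarks["mouth_bottom"])
    return width / height

def classify_mouth(landmarks, templates):
    """Pick the mouth type whose template ratio is closest to the measured one."""
    r = mouth_ratio(landmarks)
    return min(templates, key=lambda name: abs(templates[name] - r))

if __name__ == "__main__":
    landmarks = {
        "mouth_left": (120, 210), "mouth_right": (180, 212),
        "mouth_top": (150, 200), "mouth_bottom": (150, 225),
    }
    templates = {"thin": 4.0, "average": 2.5, "full": 1.8}  # hypothetical ratios
    print(classify_mouth(landmarks, templates))
```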

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • An avatar can be used as an auxiliary method to text and image communication in cyberspace. An intelligent communication method can also be used to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that can represent the sender's emotions is provided. Facial expressions can be represented by Action Units (AUs), and we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation from these parameters is also investigated.
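
The sketch below illustrates the general idea of transmitting a handful of Action Unit intensities and applying them to a simplified comic-style face model; the emotion-to-AU table and the parameter mapping are illustrative assumptions rather than the paper's method.

```python
# Minimal sketch (assumed encoding, not the paper's): send an emotion as a
# small set of Action Unit intensities instead of an image, then apply it to
# a comic-style avatar that only has eyebrows, eyes, a nose, and a mouth.
from dataclasses import dataclass

# Hypothetical emotion-to-AU table (AU numbers follow FACS conventions;
# intensities 0.0-1.0 are illustrative).
EMOTION_TO_AUS = {
    "happiness": {6: 0.8, 12: 0.9},         # cheek raiser, lip corner puller
    "sadness": {1: 0.7, 15: 0.6},           # inner brow raiser, lip corner depressor
    "surprise": {1: 0.8, 2: 0.8, 26: 0.7},  # brow raisers, jaw drop
}

@dataclass
class ComicAvatarFace:
    brow_raise: float = 0.0
    eye_open: float = 0.5
    mouth_curve: float = 0.0
    mouth_open: float = 0.0

    def apply_aus(self, aus: dict[int, float]) -> None:
        # Map a few AUs onto the simplified face parameters.
        self.brow_raise = max(aus.get(1, 0.0), aus.get(2, 0.0))
        self.mouth_curve = aus.get(12, 0.0) - aus.get(15, 0.0)
        self.mouth_open = aus.get(26, 0.0)

if __name__ == "__main__":
    face = ComicAvatarFace()
    face.apply_aus(EMOTION_TO_AUS["happiness"])
    print(face)
```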

An Error Examination of 3D Face Automatic Recognition (3차원 안면자동인식기의 형상복원 오차검사)

  • Suk, Jae-Hwa;Cho, Kyung-Rae;Cho, Yong-Beum;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Soo-Kyung;Kho, Byung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.18 no.2 / pp.41-49 / 2006
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions. We are developing a 3D Face Automatic Recognition Apparatus to analyze facial characteristics, so we need to examine the shape restoration error of this apparatus. 2. Methods: We compared facial shape data restored by the 3D Face Automatic Recognition Apparatus with facial shape data restored by a 3D laser scanner. The subjects were two Korean men. We analyzed the average error and the maximum error between the two data sets, using one datum point (the tip of the nose) and two datum lines (a vertical section and a horizontal section). 3. Results and Conclusions: In the two comparisons, the average error of the vertical section was 1.962574 mm and 2.703814 mm, and the maximum error of the vertical section was 16.968249 mm and 18.61464 mm; the average error of the horizontal section was 4.173203 mm and 21.487479 mm, and the maximum error of the horizontal section was 3.571210 mm and 17.13255 mm. We also made small improvements to the apparatus and re-examined its shape restoration error; the accuracy of shape restoration improved slightly. In future work we will further improve the accuracy of shape restoration in the 3D Face Recognition Apparatus.
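
For illustration only, the snippet below computes an average and a maximum point-wise error between a restored profile section and a reference scanner section, in the spirit of the comparison described above; the profiles are synthetic and do not reproduce the paper's measurements.

```python
# Minimal sketch (illustrative only): compare a profile section restored by
# one device against the same section from a reference scanner, reporting
# the average and maximum point-wise error in millimetres.
import numpy as np

def section_errors(restored: np.ndarray, reference: np.ndarray):
    """Average and maximum absolute depth error along a matched section (mm)."""
    errors = np.abs(restored - reference)
    return errors.mean(), errors.max()

if __name__ == "__main__":
    # Hypothetical depth profiles (mm) sampled at matching positions along a
    # vertical section through the nose-tip datum point.
    reference = np.linspace(0.0, 40.0, 50)
    restored = reference + np.random.default_rng(1).normal(0.0, 2.0, size=50)
    avg_err, max_err = section_errors(restored, reference)
    print(f"average error: {avg_err:.3f} mm, maximum error: {max_err:.3f} mm")
```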

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) as well as HCI (Human-Computer Interaction). Using facial expressions, a system can produce reactions that correspond to the user's emotional state, and it can infer which services a service agent such as an intelligent robot should supply to the user. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed by the eyes and mouth, so recognizing an emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the most commonly used methods for facial feature extraction, and it can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstruction parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the image, and calculate the initial parameters of the AAM from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian Network.
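
A highly simplified sketch of the fit-then-classify idea follows: linear shape-model parameters are iteratively adjusted to reduce the distance to a target contour, and the fitted parameters are then classified into one of Ekman's six emotions. A Gaussian naive Bayes classifier stands in for the paper's Bayesian Network, and all shapes, bases, and training data are synthetic assumptions.

```python
# Minimal sketch (assumptions throughout): iteratively adjust shape-model
# parameters to reduce the distance to a target contour, then classify the
# fitted parameter vector into one of six emotions. GaussianNB is a stand-in
# for the paper's Bayesian Network.
import numpy as np
from sklearn.naive_bayes import GaussianNB

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fit_shape(params, target, basis, mean_shape, steps=200, lr=0.01):
    """Iterative gradient refinement of linear shape-model parameters."""
    for _ in range(steps):
        shape = mean_shape + basis @ params          # current model shape
        residual = target - shape                    # distance error to contour
        params = params + lr * (basis.T @ residual)  # move parameters downhill
    return params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mean_shape = rng.normal(size=40)   # hypothetical 20-point contour (x, y)
    basis = rng.normal(size=(40, 5))   # five hypothetical shape modes
    target = mean_shape + basis @ np.array([0.5, -0.2, 0.1, 0.0, 0.3])

    fitted = fit_shape(np.zeros(5), target, basis, mean_shape)

    # Train a stand-in emotion classifier on made-up parameter vectors.
    X = rng.normal(size=(120, 5))
    y = rng.integers(0, 6, size=120)
    clf = GaussianNB().fit(X, y)
    print(EMOTIONS[int(clf.predict(fitted.reshape(1, -1))[0])])
```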