• Title/Summary/Keyword: facial image


Automatic Face Extraction with Unification of Brightness Distribution in Candidate Region and Triangle Structure among Facial Features (후보영역의 밝기 분산과 얼굴특징의 삼각형 배치구조를 결합한 얼굴의 자동 검출)

  • 이칠우;최정주
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.1
    • /
    • pp.23-33
    • /
    • 2000
  • In this paper, we describe an algorithm that can extract human faces in natural poses from complex backgrounds. The method builds on the observation that a facial region has nearly the same gray level for all pixels within appropriately scaled blocks. Based on this idea, we develop a hierarchical process: first, a block image with a pyramid structure is generated from the input image and candidate facial regions are quickly determined in the block image; then the detailed facial features (organs) are located. To find the features easily, we introduce a local gray-level transform that emphasizes dark, small regions, and we evaluate the geometrical triangle constraints among the facial features. The merit of our method is that it is free from the parameter-assignment problem, since the algorithm uses only simple brightness computations; consequently, robust systems that do not depend on specific parameter values can be constructed easily.

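The block-based brightness idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size and variance threshold (`block`, `var_thresh`) are hypothetical parameters, and a real system would run this at several pyramid levels.

```python
import numpy as np

def block_stats(gray, block=8):
    """Average brightness and variance per non-overlapping block."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    g = gray[:h, :w].reshape(h // block, block, w // block, block)
    means = g.mean(axis=(1, 3))
    varis = g.var(axis=(1, 3))
    return means, varis

def candidate_blocks(gray, block=8, var_thresh=100.0):
    """Blocks whose brightness is nearly uniform are face-region candidates,
    following the paper's assumption of near-constant gray level in faces."""
    _, varis = block_stats(gray, block)
    return varis < var_thresh
```

Candidate blocks found this way would then be verified with the triangle constraints among the eyes and mouth, which this sketch omits.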

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.633-638
    • /
    • 2005
  • The extraction of a face is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is built by applying morphological dilation to the combination of a facial-color image and a previously dilated frame-difference image; this filter removes the complex background and detects the facial outline. Because Active Contour Models are strongly affected by their initial curves, we estimate the rotation angle using the geometric ratios of the face, eyes, and mouth. We use edgeness together with intensity as the image energy in order to extract the outline in areas with weak edges. We acquired various head-pose images, with both eyes visible, from five people in an indoor space with a complex background. In an experiment with a total of 125 images (25 per person), the average extraction rate of the facial outline was 98.1% and the average processing time was 0.2 sec.

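A rough sketch of the DCM-style combination of colour and motion masks, assuming binary masks and a simple square structuring element. The `dilate` helper, radius, and motion threshold are illustrative choices, not the authors' code:

```python
import numpy as np

def dilate(mask, r=1):
    """Binary morphological dilation with a (2r+1)x(2r+1) square element."""
    padded = np.pad(mask, r)
    out = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w].astype(bool)
    return out

def dcm_filter(skin_mask, prev_frame, cur_frame, motion_thresh=20, r=1):
    """Dilation of Color and Motion (sketch): dilate the frame-difference
    mask, intersect it with the skin-colour mask, then dilate the result."""
    motion = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > motion_thresh
    combined = skin_mask.astype(bool) & dilate(motion, r)
    return dilate(combined, r)
```

The resulting mask would serve to suppress the background before the snake is initialised on the remaining region.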

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae;Ko, Jae-Pil
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.11
    • /
    • pp.1465-1473
    • /
    • 2006
  • In this paper, we present an approach to facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, we use ASM to locate the facial feature points and obtain the shape parameter vector of the model. We thus obtain a shape parameter vector set over all frames of an image sequence, which the state-based model converts into a state vector taking one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the variation regions of one expression sequence differ from those of other expression sequences. In experiments on the public database KCFD, the proposed measure slightly outperforms the binary measure: with k = 1, k-NN achieves 89.1% recognition with the proposed measure versus 86.2% with the existing binary measure.

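The variation-region idea could look roughly like this: score only the regions where at least one sequence leaves the neutral state. The three-state encoding (`NEUTRAL = 0`) and the per-region vector layout are assumptions for illustration, not the paper's actual state model or measure:

```python
import numpy as np

NEUTRAL = 0  # hypothetical state labels: 0 = neutral, 1 = onset, 2 = apex

def variation_similarity(a, b):
    """Compare two state vectors only on regions where at least one of them
    varies, mirroring the idea that an expression is characterised by
    *which* regions change."""
    a, b = np.asarray(a), np.asarray(b)
    varying = (a != NEUTRAL) | (b != NEUTRAL)
    if not varying.any():
        return 1.0
    return float((a[varying] == b[varying]).mean())

def knn_classify(query, train_vecs, train_labels, k=1):
    """k-NN with the similarity above; ties broken by majority vote."""
    sims = [variation_similarity(query, v) for v in train_vecs]
    top = np.argsort(sims)[::-1][:k]
    labels = [train_labels[i] for i in top]
    return max(set(labels), key=labels.count)
```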

Multi-attribute Face Editing using Facial Masks (얼굴 마스크 정보를 활용한 다중 속성 얼굴 편집)

  • Ambardi, Laudwika;Park, In Kyu;Hong, Sungeun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.619-628
    • /
    • 2022
  • Although face recognition and face generation have been growing in popularity, the privacy implications of using facial images in the wild have become a concern. In this paper, we propose a face editing network that reduces these privacy issues by generating face images with various attributes from a small number of real face images and facial mask information. Unlike existing methods that learn face attributes from a large number of real face images, the proposed method generates new facial images using a facial segmentation mask and texture images of five facial parts as styles. Our network is then trained to learn the style and location of each reference image. Once the proposed framework is trained, we can generate various face images using only a few real face images and segmentation information. Extensive experiments show that the proposed method can not only generate new faces but also localize facial-attribute editing, despite using very few real face images.
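As a loose illustration of mask-guided compositing (not the proposed network, which learns styles rather than copying colours), each segmentation label can be painted with a colour derived from its reference texture:

```python
import numpy as np

def composite_by_mask(mask, part_textures):
    """Assemble an image by painting each labelled mask region with the
    mean colour of its reference texture -- a crude stand-in for the
    learned per-part style transfer described in the paper."""
    h, w = mask.shape
    out = np.zeros((h, w, 3), dtype=float)
    for label, tex in part_textures.items():
        colour = tex.reshape(-1, 3).mean(axis=0)
        out[mask == label] = colour
    return out
```

Swapping one part's reference texture changes only that region of the output, which is the localisation property the paper's network learns end to end.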

A study on age estimation of facial images using various CNNs (Convolutional Neural Networks) (다양한 CNN 모델을 이용한 얼굴 영상의 나이 인식 연구)

  • Sung Eun Choi
    • Journal of Platform Technology
    • /
    • v.11 no.5
    • /
    • pp.16-22
    • /
    • 2023
  • There is growing interest in facial age estimation because many applications require estimating age from facial images. Estimating the exact age of a face requires techniques for extracting aging features from a face image and classifying age according to those features. Recently, CNN-based deep learning models have greatly improved performance in image recognition, and various CNN-based models are being applied to facial age estimation. In this paper, age estimation performance is compared by learning facial features with various CNN-based models: AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. The experiments confirmed that the ResNet-34 model achieved the best facial age estimation performance.

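Comparing age-estimation models typically comes down to an error metric such as mean absolute error in years. A small ranking helper (the model names and prediction data in the usage are hypothetical, not the paper's results) might look like:

```python
import numpy as np

def mae(true_ages, predicted_ages):
    """Mean absolute error in years, a common metric for age estimation."""
    t = np.asarray(true_ages, dtype=float)
    p = np.asarray(predicted_ages, dtype=float)
    return float(np.abs(t - p).mean())

def best_model(true_ages, predictions_by_model):
    """Return the name of the model with the lowest MAE on the test set."""
    return min(predictions_by_model,
               key=lambda name: mae(true_ages, predictions_by_model[name]))
```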

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.464-471
    • /
    • 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using StyleGAN Encoder. The facial expression generation process first creates a face image with StyleGAN Encoder, then changes the expression by applying a boundary, learned with an SVM, to the latent vector. However, when the boundary of a smiling expression is learned, age distortion occurs: the smile boundary learned by the SVM includes expression wrinkles as learning elements, so age characteristics are learned along with the expression. To solve this problem, the proposed method computes the correlation coefficient between the smile boundary and the age boundary and adjusts the age component of the smile boundary in proportion to that coefficient. To confirm the effectiveness of the proposed method, we measured FID scores on the FFHQ dataset, a publicly available standard face dataset. For smile images, the FID score between the ground truth and the images generated by the proposed method improved by about 0.46 over the existing method, and the FID score between the StyleGAN Encoder output and the generated smile images improved by about 1.031. For non-smile images, the corresponding FID scores improved by about 2.25 and 1.908, respectively. Meanwhile, estimating the age of each generated expression image and measuring the MSE against the age estimated from the StyleGAN Encoder output, the proposed method reduced the age error by about 1.5 years on average for smile images and about 1.63 years for non-smile images, demonstrating its effectiveness.
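The boundary-adjustment step described above, removing the age-correlated component of the smile boundary in proportion to the correlation coefficient, can be sketched with plain unit vectors. The exact normalisation the authors use is not specified, so the re-normalisation here is an assumption:

```python
import numpy as np

def adjust_boundary(smile_boundary, age_boundary):
    """Subtract the age-correlated component from the smile boundary in
    proportion to the correlation coefficient of the two unit directions,
    then re-normalise the adjusted boundary."""
    s = smile_boundary / np.linalg.norm(smile_boundary)
    a = age_boundary / np.linalg.norm(age_boundary)
    corr = float(np.dot(s, a))       # correlation of the two directions
    adjusted = s - corr * a          # remove the age component
    return adjusted / np.linalg.norm(adjusted)
```

After adjustment the smile direction is orthogonal to the age direction, so moving a latent code along it should change the expression without shifting the apparent age.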

A Study on the Face Image to Shape Differences and Make up (얼굴의 형태적 특성과 메이크업에 의한 얼굴 이미지 연구)

  • Song, Mi-Young;Park, Oak-Reon;Lee, Young-Ju
    • Korean Journal of Human Ecology
    • /
    • v.14 no.1
    • /
    • pp.143-153
    • /
    • 2005
  • The purpose of this research is to study face images according to differences in facial shape and make-up. A variety of face images can be produced by computer graphic simulation, combining different facial shapes and make-up styles. To examine the diverse images produced by make-up styles, we applied five eyebrow forms, two eye-shadow types, and three lip shapes to the round-shaped face of a model. The questionnaire used as the experimental stimulus contained 28 items, each a bipolar adjective pair rated on a 7-point scale. Data were analyzed using the Varimax orthogonal rotation method, Duncan's multiple range test, and three-way ANOVA. Comparing the results of applying various make-up styles to various face types, we found that facial shape, eyebrows, eye shadow, and lip shape interactively influence the overall facial image. Factor analysis of make-up image perception yielded four factors: mildness, modernness, elegance, and sociableness. In terms of those factors, a round make-up style showed the highest mildness, and upward and straight make-up styles the highest modernness; elegance was highest with round eye shadow and straight lips; and an incurved lip style showed the highest sociableness.


Face Detection and Recognition with Multiple Appearance Models for Mobile Robot Application

  • Lee, Taigun;Park, Sung-Kee;Kim, Munsang
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.100.4-100
    • /
    • 2002
  • For visual navigation, a mobile robot can use a stereo camera with a large field of view. In this paper, we propose a new coarse-to-fine algorithm to detect and recognize human faces on the basis of such a camera system. For coarse detection, roughly face-like areas are found in the entire image using dual ellipse templates. Detailed alignment of the facial outline and features is then performed on the basis of a view-based multiple appearance model. Because it is hard to align finely with facial features in this case, the most closely resembling face image area is selected from multiple face appearances using the most distinctive facial features: two eye...


Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems
    • /
    • v.12 no.4
    • /
    • pp.591-611
    • /
    • 2016
  • High-dimensional space is the biggest problem in classification, because it increases computation time and therefore cost. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials are proposed to extract facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces offer an alternative appearance-based feature extraction method for handling non-linear features. The kernel trick is used to carry out the matrix computation for the homogeneous and non-homogeneous polynomials. The weights and projections of the new feature space of the proposed method were evaluated on three face image databases: YALE, ORL, and UoB. The experiments produced highest recognition rates of 94.44%, 97.5%, and 94% on YALE, ORL, and UoB, respectively, showing that the proposed method achieves higher recognition than other methods such as Eigenface, Fisherface, Laplacianfaces, and O-Laplacianfaces.
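A minimal sketch of building an eigenspace from a polynomial kernel matrix via the kernel trick: setting `c=0` gives the homogeneous kernel (x·y)^d and `c>0` the non-homogeneous one. The feature-space centring and component selection here are standard kernel-PCA choices, not necessarily the paper's exact procedure:

```python
import numpy as np

def polynomial_kernel_eigenspace(X, degree=2, c=1.0, n_components=2):
    """Eigen-decomposition of a polynomial kernel matrix over the rows of X.
    Returns the leading eigenvalues and eigenvectors of the centred kernel,
    which define the projection onto the non-linear feature space."""
    K = (X @ X.T + c) ** degree          # polynomial kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]
```

New samples are projected by evaluating the kernel against the training rows and multiplying by the stored eigenvectors, which this sketch leaves out.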

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong;Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.47-58
    • /
    • 2018
  • In this paper, we propose a method for skin condition analysis using the camera module embedded in a smartphone, without a separate skin diagnosis device. The types of skin condition detected in facial images taken by smartphone are acne, pigmentation, blemish, and flush. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flush were extracted by thresholding a range of the hue component image, and pigmentation was computed from a factor between the minimum and maximum values of the corresponding skin pixels in the R component image. Blemishes were detected using adaptive thresholds on the gray-scale image. Experimental results showed that the proposed analysis effectively detects acne, pigmentation, blemish, and flush.
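The hue-range extraction and adaptive thresholding described above can be sketched as follows. The hue bounds and offset are placeholder values, and the "adaptive" threshold is simplified here to a global mean rather than a locally computed one:

```python
import numpy as np

def hue_mask(hue, lo, hi):
    """Pixels whose hue (degrees, 0-360) falls in [lo, hi]; the range may
    wrap around 0, as the reddish hues typical of acne and flush do."""
    if lo <= hi:
        return (hue >= lo) & (hue <= hi)
    return (hue >= lo) | (hue <= hi)

def adaptive_blemish_mask(gray, offset=15):
    """Blemish candidates: pixels darker than a mean-based threshold --
    a simplified stand-in for the paper's adaptive thresholding."""
    return gray < (gray.mean() - offset)
```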