• Title/Summary/Keyword: Facial image


Local Appearance-based Face Recognition Using SVM and PCA (SVM과 PCA를 이용한 국부 외형 기반 얼굴 인식 방법)

  • Park, Seung-Hwan;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.3, pp.54-60, 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small areas, extracts features from each area using statistical analysis, and decides the identity of the face by integrating the per-area classification results with a voting scheme. The conventional local appearance-based method divides face images into small pieces and uses all of the pieces in the recognition process. In this paper, we propose a local appearance-based method that makes use of only the relatively important facial components. The proposed method detects facial components such as the eyes, nose, and mouth, which differ greatly from person to person, and locates them precisely using support vector machines (SVM). Based on the detected facial components, a number of small images containing the facial parts are constructed, and features are extracted from each facial component image using principal component analysis (PCA). We compared the performance of the proposed method with those of conventional methods. The results show that the proposed method outperforms the conventional local appearance-based method while preserving its advantages.
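
A minimal sketch of the component-wise PCA-plus-voting idea described above, assuming aligned, equally sized grayscale face images; the crop coordinates, PCA dimensionality, and linear SVM below are illustrative choices, not the authors' settings (the SVM-based component detection step is omitted).

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Hypothetical component crops: (row, col, height, width) in an aligned face image.
COMPONENTS = {"left_eye": (20, 10, 16, 24), "right_eye": (20, 46, 16, 24),
              "nose": (36, 28, 20, 24), "mouth": (58, 22, 18, 36)}

def crop(face, box):
    r, c, h, w = box
    return face[r:r + h, c:c + w].reshape(-1)           # flatten the component patch

def train(faces, labels, n_components=20):
    """faces: list of equally sized grayscale arrays; labels: identity per face."""
    models = {}
    for name, box in COMPONENTS.items():
        X = np.array([crop(f, box) for f in faces])
        pca = PCA(n_components=n_components).fit(X)      # per-component subspace
        clf = SVC(kernel="linear").fit(pca.transform(X), labels)
        models[name] = (pca, clf)
    return models

def predict(models, face):
    votes = []
    for name, (pca, clf) in models.items():
        x = crop(face, COMPONENTS[name])[None, :]
        votes.append(clf.predict(pca.transform(x))[0])   # one vote per component
    return Counter(votes).most_common(1)[0][0]           # majority vote decides identity
```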

Robust Real-time Face Detection Scheme on Various illumination Conditions (다양한 조명 환경에 강인한 실시간 얼굴확인 기법)

  • Kim, Soo-Hyun;Han, Young-Joon;Cha, Hyung-Tai;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems, v.14 no.7, pp.821-829, 2004
  • Face recognition has been used for verifying and authorizing valid users, but its applications have been restricted by lighting conditions. To minimize these restrictions, this paper proposes a new algorithm for detecting the face in an input image obtained under irregular lighting conditions. First, the proposed algorithm extracts an edge difference image from the input image, in which the skin color and face contour may be lost due to the background color or the lighting direction. In the next step, it extracts a face region using the histogram of the edge difference image and the intensity information. Using the intensity information, the face region is divided into horizontal regions likely to contain facial features. Each horizontal region is classified into one of three groups according to the facial features it may contain (eyes, nose, and mouth), and the facial features are extracted using their empirical properties. Only when the facial features satisfy their topological rules is the face region considered a face. Experiments have shown that the proposed algorithm can detect faces even when a large portion of the face contour is lost due to inadequate lighting or when the image background color is similar to the skin color.
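
As a rough, hypothetical illustration of relying on edge information rather than skin color when lighting is irregular, the snippet below bounds a candidate face region from the projection histograms of an edge image; the threshold ratio and Canny parameters are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def face_region_from_edges(gray, thresh_ratio=0.2):
    """gray: uint8 grayscale frame; returns (x0, y0, x1, y1) of a candidate face region."""
    edges = cv2.Canny(gray, 50, 150)                 # edge map, less lighting-dependent than color
    col_hist = edges.sum(axis=0).astype(float)       # vertical projection histogram
    row_hist = edges.sum(axis=1).astype(float)       # horizontal projection histogram

    def bounds(hist):
        t = thresh_ratio * hist.max()
        idx = np.where(hist > t)[0]
        return (int(idx[0]), int(idx[-1])) if idx.size else (0, len(hist) - 1)

    x0, x1 = bounds(col_hist)
    y0, y1 = bounds(row_hist)
    return x0, y0, x1, y1
```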

Development of Facial Palsy Grading System with Three Dimensional Image Processing (3차원 영상처리를 이용한 안면마비 평가시스템 개발)

  • Jang, M.;Shin, S.H.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology, v.9 no.2, pp.129-135, 2015
  • An objective grading system for facial palsy is needed. In this study, a facial palsy grading system was developed by combining three-dimensional image processing with the Nottingham scale. The developed system is composed of four parts: a measurement part, an image processing part, a computational part, and a facial palsy evaluation & display part. Two webcams were used to acquire images. Eight markers on the face were recognized in the image processing part, and the absolute three-dimensional positions of the markers were calculated in the computational part. Finally, the Nottingham scale was calculated and displayed in the facial palsy evaluation & display part. The effects of the measurement method and the subject's position on the Nottingham scale were tested: the markers were measured in 2D and in 3D, with the subject looking at the camera at 0° and at 11° of rotation. The change in the scale was large in the case of 11° rotation with 2D measurement, so the developed system with 3D measurement is robust to changes in the subject's orientation and reduces grading errors originating from subject posture.
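
A hedged sketch of the two-camera measurement step: marker pixel coordinates from the two webcams are triangulated into absolute 3D positions, from which a left/right distance ratio of the kind used in Nottingham-style scoring can be formed. The projection matrices are assumed to come from prior calibration, and the marker index pairs are placeholders.

```python
import cv2
import numpy as np

def marker_positions_3d(P1, P2, pts_cam1, pts_cam2):
    """P1, P2: 3x4 projection matrices; pts_cam1, pts_cam2: 2xN float arrays of marker pixels."""
    homog = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)   # 4xN homogeneous coordinates
    return (homog[:3] / homog[3]).T                             # Nx3 absolute 3D positions

def side_to_side_ratio(points3d, left_pair, right_pair):
    """Ratio of a marker distance on the left side to the matching distance on the right side."""
    dist = lambda i, j: np.linalg.norm(points3d[i] - points3d[j])
    return dist(*left_pair) / dist(*right_pair)
```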


Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, 2003.10a, pp.783-788, 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Due to the mobility of the camera platform, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints and still detect and recognize faces in nearly real time. In the detection step, a coarse-to-fine detection strategy is used. First, a region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features (the two eyes and the mouth) are estimated. To this end, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of the feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid on this convex hull area, and the 2D appearance of this area is represented. From these procedures, facial information for the detected face is obtained, and the face DB images of each person class are processed in the same way. Based on the facial information of these areas, a distance measure over the matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of moderate pose variations of the face. Their usefulness in a mobile robot application is demonstrated.
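
The geometric verification of candidate features can be pictured with the simplified check below: a left-eye, right-eye, and mouth candidate are accepted only if their distances and the eye-line tilt fall within plausible ranges. All thresholds are illustrative assumptions rather than the paper's rules.

```python
import numpy as np

def plausible_face_triangle(eye_l, eye_r, mouth,
                            eye_dist_range=(20, 120),    # pixels, assumed
                            max_tilt_deg=30,
                            mouth_ratio_range=(0.8, 1.8)):
    eye_l, eye_r, mouth = map(np.asarray, (eye_l, eye_r, mouth))
    eye_dist = np.linalg.norm(eye_r - eye_l)
    if not eye_dist_range[0] <= eye_dist <= eye_dist_range[1]:
        return False
    tilt = np.degrees(np.arctan2(eye_r[1] - eye_l[1], eye_r[0] - eye_l[0]))
    if abs(tilt) > max_tilt_deg:                         # eye line should be roughly horizontal
        return False
    mouth_drop = np.linalg.norm(mouth - (eye_l + eye_r) / 2.0)
    ratio = mouth_drop / eye_dist                        # mouth distance relative to eye spacing
    return mouth_ratio_range[0] <= ratio <= mouth_ratio_range[1]
```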


Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu;Park, Chang-Woo;Lee, Minsik;Choi, Sang-Il;Ji, Sang-Hoon;Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.5, pp.1690-1710, 2014
  • In this paper, we propose a rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach robust to lighting conditions and a fast method based on a Canonical Correlation Analysis (CCA) algorithm to estimate depth. The reconstruction system is built by first creating a 3D facial mapping from a personal identity vector of a face image. This mapping is then applied to real-world images captured with the built-in camera of a mobile device to form the corresponding 3D depth information. Finally, the facial texture from the face image is extracted and added to the reconstruction result. Experiments with an Android phone show that the implementation of this system as an Android application performs well. The advantage of the proposed method is the easy and fast 3D reconstruction of almost all facial images captured in the real world, which is clearly demonstrated in the Android application, where only a short time is required to reconstruct the 3D depth map.
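
The CCA-based depth estimation can be sketched as a regression from a 2D face descriptor to a flattened depth map; the data shapes, component count, and use of scikit-learn's CCA are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def train_depth_model(face_vectors, depth_maps, n_components=10):
    """face_vectors: (n, d) appearance descriptors; depth_maps: (n, h*w) flattened training depths."""
    cca = CCA(n_components=n_components)
    cca.fit(face_vectors, depth_maps)                       # learn correlated subspaces of the two views
    return cca

def estimate_depth(cca, face_vector, depth_shape):
    depth_flat = cca.predict(face_vector.reshape(1, -1))    # regress depth from appearance alone
    return depth_flat.reshape(depth_shape)
```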

Facial Recognition Algorithm Based on Edge Detection and Discrete Wavelet Transform

  • Chang, Min-Hyuk;Oh, Mi-Suk;Lim, Chun-Hwan;Ahmad, Muhammad-Bilal;Park, Jong-An
    • Transactions on Control, Automation and Systems Engineering, v.3 no.4, pp.283-288, 2001
  • In this paper, we propose a method for extracting the facial characteristics of a human being in an image. Given a pair of gray-level sample images taken with and without the human being, the face is segmented from the image. Noise in the input images is removed with Gaussian filters, and edge maps of the two input images are computed. The binary edge differential image is obtained from the difference of the two edge maps, and a mask for face detection is made by erosion followed by dilation on this binary edge differential image. This mask is used to extract the human being from the two input image sequences, and features of the face are extracted from the segmented image. An effective recognition system using the discrete wavelet transform (DWT) is used for recognition. To extract facial features such as the eyebrows, eyes, nose, and mouth, an edge detector is applied to the segmented face image. The eye area and the center of the face are found from the horizontal and vertical components of the edge map of the segmented image, and the other facial features are obtained from the edge information of the image. The characteristic vectors are extracted from the DWT of the segmented face image, normalized between -1 and +1, and used as input vectors for the neural network. Simulation results show a recognition rate of 100% on the learned system and about 92% on the test images.
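
A hedged sketch of the mask-construction step described above: the difference of the edge maps of the two scenes (with and without the person) is cleaned by erosion followed by dilation to produce the segmentation mask. Kernel size, blur strength, and Canny thresholds are assumptions.

```python
import cv2
import numpy as np

def person_mask(img_with, img_without, blur_sigma=1.5, kernel_size=5):
    """img_with, img_without: uint8 gray-level images of the scene with and without the person."""
    g1 = cv2.GaussianBlur(img_with, (0, 0), blur_sigma)      # Gaussian filtering removes noise
    g2 = cv2.GaussianBlur(img_without, (0, 0), blur_sigma)
    diff = cv2.absdiff(cv2.Canny(g1, 50, 150),                # binary edge differential image
                       cv2.Canny(g2, 50, 150))
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.erode(diff, kernel)                            # erosion removes stray edge pixels
    mask = cv2.dilate(mask, kernel, iterations=2)             # dilation closes the face/body region
    return mask
```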


Facial Image Synthesis by Controlling Skin Microelements (피부 미세요소 조절을 통한 얼굴 영상 합성)

  • Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering, v.27 no.3, pp.369-377, 2022
  • Recent deep learning-based face synthesis research has shown that realistic faces can be generated, including overall style and elements such as hair, glasses, and makeup. However, previous methods cannot create a face at a very detailed level, such as the microstructure of the skin. In this paper, to overcome this limitation, we propose a technique for synthesizing a more realistic facial image from a single face label image by controlling the types and intensity of skin microelements. The proposed technique uses Pix2PixHD, an image-to-image translation method, to convert a label image indicating the facial region and skin elements such as wrinkles, pores, and redness into a facial image with the added microelements. Experimental results show that various realistic face images reflecting fine skin elements can be created by generating label images with adjusted skin element regions.
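
How the controllable input might be organized can be sketched as below: a label image stacks the facial region with per-element maps (wrinkles, pores, redness) whose intensities are scaled before an image-to-image translation network (Pix2PixHD in the paper) synthesizes the output. The channel layout and scaling scheme are illustrative assumptions, not the authors' encoding.

```python
import numpy as np

def build_label_image(face_mask, element_maps, intensities):
    """face_mask: (h, w) binary facial region; element_maps, intensities: dicts keyed by element name."""
    h, w = face_mask.shape
    label = np.zeros((h, w, 1 + len(element_maps)), dtype=np.float32)
    label[..., 0] = face_mask                                    # channel 0: facial region
    for ch, (name, emap) in enumerate(element_maps.items(), start=1):
        scale = intensities.get(name, 1.0)                       # user-controlled element intensity
        label[..., ch] = np.clip(emap * scale, 0.0, 1.0) * face_mask
    return label                                                 # input to the generator network
```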

SKELETAL PATTERN ANALYSIS OF FACIAL ASYMMETRY PATIENT USING THREE DIMENSIONAL COMPUTED TOMOGRAPHY (삼차원 전산화 단층촬영술을 이용한 안모 비대칭환자의 골격 분석)

  • Choi, Jung-Goo;Min, Seung-Ki;Oh, Seung-Hwan;Kwon, Kyung-Hwan;Choi, Moon-Ki;Lee, June;Oh, Se-Ri;Yu, Dae-Hyun
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons, v.34 no.6, pp.622-627, 2008
  • In orthognathic surgery, precise analysis and diagnosis are essential for successful results. For facial asymmetry patients, traditional 2D image analysis has relied on lateral and P-A cephalometric views, skull PA, panorama, submentovertex views, and so on. However, clinicians sometimes misdiagnose with these methods because exact landmarks cannot be found due to superimposition, and the image can be magnified and distorted by the projection technique or the patient's skull position. To overcome these shortcomings, analysis using 3D CT has been introduced; it allows precise analysis by providing exact images free of artifacts and by locating exact landmarks without interference from superimposition. We therefore reviewed the relationship between various skeletal landmarks of the mandible or cranial base and facial asymmetry through analysis using 3D CT. We selected the cases of patients who visited our department for correction of facial asymmetry during 2003-2007 and who underwent 3D CT imaging for diagnosis. The CT images were reconstructed into 3D images using the V-Work program (Cybermed Inc., Seoul, Korea), and the relationship between facial asymmetry and various skeletal pattern factors was analyzed. The difference in mandibular ramus height between the right and left sides was the factor most strongly expressing facial asymmetry, and in this study no relationship was found between the cranial base and facial asymmetry. The angulation between the facial midline and mandibular ramus divergence showed a significant relationship with facial asymmetry.
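
As a small illustrative computation (not the clinical measurement protocol), the two quantities the analysis relates to asymmetry can be obtained from 3D CT landmark coordinates as follows: the left/right mandibular ramus height difference and the angle between the facial midline and a ramus axis. The specific landmark choices here are assumptions.

```python
import numpy as np

def ramus_height_difference(condyle_l, gonion_l, condyle_r, gonion_r):
    """Difference between left and right condylion-to-gonion distances (ramus heights)."""
    height = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    return abs(height(condyle_l, gonion_l) - height(condyle_r, gonion_r))

def midline_ramus_angle(nasion, menton, condyle, gonion):
    """Angle (degrees) between an assumed facial midline (nasion-menton) and a ramus axis."""
    midline = np.asarray(menton) - np.asarray(nasion)
    ramus = np.asarray(gonion) - np.asarray(condyle)
    cos_a = np.dot(midline, ramus) / (np.linalg.norm(midline) * np.linalg.norm(ramus))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```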

The Effects of Image Making According to Somatotypes and Face Types

  • Choi, Mee-Sung;Kim, Sung-Hee
    • Journal of Fashion Business, v.10 no.3, pp.44-53, 2006
  • The purposes of this study were to investigate the relationships among somatotypes and face types and the importance of image making for a successful student life. The respondents were 181 males and 160 females. The questionnaire consisted of 29 items in total, covering facial features, personality expression methods, characteristics of body shapes, image making, colors, and necklines, and responses were measured on a 5-point Likert scale. For data analysis, descriptive statistics, cross-tabulation analysis including the χ²-test, and frequency analysis were used. As a result, 47% of male students and 28% of female students responded that they were satisfied with their own facial types. 32% of male students and all female students were not satisfied with their own body shapes, and their fashion depended on accessories such as hats, sunglasses, boots, and necklaces rather than on the dress itself. All male and female students were dissatisfied with their body shapes and recognized the importance of image. They answered that they would change their image if someone advised them on it, which suggests that informational and intellectual needs for image making exist and that efficient methods of image making are needed.
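
For the cross-tabulation χ²-test mentioned above, a generic example is sketched below using counts derived from the reported satisfaction percentages; the figures are illustrative only, since the study's actual contingency tables are not given here.

```python
from scipy.stats import chi2_contingency

# Rows: male (n=181), female (n=160); columns: satisfied / not satisfied with facial type,
# with counts approximated from the 47% and 28% figures reported in the abstract.
table = [[85, 96],
         [45, 115]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```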

Markerless Image-to-Patient Registration Using Stereo Vision: Comparison of Registration Accuracy by Feature Selection Method and Location of Stereo Vision System (스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가)

  • Joo, Subin;Mun, Joung-Hwan;Shin, Ki-Young
    • Journal of the Institute of Electronics and Information Engineers, v.53 no.1, pp.118-125, 2016
  • This study evaluates the performance of an image-to-patient registration algorithm using stereo vision and CT images for surgical navigation of the facial region. In the image-to-patient registration process, feature extraction and 3D coordinate calculation are performed, followed by registration of the 3D CT image to the 3D coordinates. Of the five combinations generated from three facial feature extraction methods and three registration methods applied to the stereo vision images, this study identifies the one with the highest registration accuracy. In addition, the image-to-patient registration accuracy was compared while changing the facial rotation angle. The experiments showed that when the facial rotation angle is within 20 degrees, registration using the Active Appearance Model and Pseudo-Inverse Matching has the highest accuracy, and when the facial rotation angle is over 20 degrees, registration using Speeded Up Robust Features and Iterative Closest Point has the highest accuracy. These results indicate that the Active Appearance Model and Pseudo-Inverse Matching methods should be used to reduce registration error when the facial rotation angle is within 20 degrees, and the Speeded Up Robust Features and Iterative Closest Point methods should be used when the facial rotation angle is over 20 degrees.
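
A condensed sketch of the registration core shared by both variants compared above: given matched 3D points from the stereo reconstruction and the CT model, the rigid transform is recovered with an SVD-based least-squares fit (the building block of ICP), and the reported 20-degree rule selects the feature/registration pair. Correspondence search and the AAM/SURF feature stages are omitted; this is not the study's implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    """src, dst: (n, 3) corresponding points; returns R, t such that dst ≈ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def choose_method(facial_rotation_deg):
    """Selection rule reported in the paper, keyed on the facial rotation angle."""
    return "AAM + Pseudo-Inverse Matching" if facial_rotation_deg <= 20 else "SURF + ICP"
```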