• Title/Summary/Keyword: Face image


Facial Recognition Algorithm Based on Edge Detection and Discrete Wavelet Transform

  • Chang, Min-Hyuk;Oh, Mi-Suk;Lim, Chun-Hwan;Ahmad, Muhammad-Bilal;Park, Jong-An
    • Transactions on Control, Automation and Systems Engineering / v.3 no.4 / pp.283-288 / 2001
  • In this paper, we propose a method for extracting the facial characteristics of a person in an image. Given a pair of gray-level sample images taken with and without the person present, the person's face is segmented from the image. Noise in the input images is removed with Gaussian filters. Edge maps of the two input images are computed, and a binary edge-differential image is obtained from the difference of the two edge maps. A mask for face detection is made by applying erosion followed by dilation to this binary edge-differential image, and the mask is used to extract the person from the two input image sequences. Facial features are then extracted from the segmented image, and an effective recognition system based on the discrete wavelet transform (DWT) is used for recognition. To extract facial features such as the eyebrows, eyes, nose, and mouth, an edge detector is applied to the segmented face image. The eye area and the center of the face are found from the horizontal and vertical components of the edge map of the segmented image; the other facial features are obtained from the edge information of the image. Characteristic vectors are extracted from the DWT of the segmented face image, normalized between +1 and -1, and used as input vectors for a neural network. Simulation results show a recognition rate of 100% on the learned images and about 92% on the test images.
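
As a rough illustration of the pipeline described in this abstract, the sketch below reproduces the edge-differential mask and DWT feature steps with OpenCV and PyWavelets. The file names, Canny thresholds, kernel sizes, and wavelet choice are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np
import pywt  # PyWavelets

# Two grayscale frames of the same scene, one without and one with the person
# ("without_person.png" / "with_person.png" are hypothetical file names).
bg = cv2.imread("without_person.png", cv2.IMREAD_GRAYSCALE)
fg = cv2.imread("with_person.png", cv2.IMREAD_GRAYSCALE)

# Gaussian smoothing to suppress noise, then edge maps of both frames
bg_edges = cv2.Canny(cv2.GaussianBlur(bg, (5, 5), 1.0), 50, 150)
fg_edges = cv2.Canny(cv2.GaussianBlur(fg, (5, 5), 1.0), 50, 150)

# Binary edge-differential image: edges that appear only in the frame with the person
diff_bin = cv2.subtract(fg_edges, bg_edges)

# Erosion followed by dilation turns the edge difference into a segmentation mask
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(diff_bin, kernel, iterations=1), kernel, iterations=3)

# Segment the person, then take 2-D DWT coefficients of the segmented image as features
segmented = cv2.bitwise_and(fg, fg, mask=mask)
cA, (cH, cV, cD) = pywt.dwt2(segmented.astype(np.float32), "haar")
features = cA.ravel()
# normalize to [-1, 1] before feeding a neural-network classifier
features = 2 * (features - features.min()) / (features.max() - features.min() + 1e-8) - 1
```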


A Face Detection using Pupil-Template from Color Base Image (컬러 기반 영상에서 눈동자 템플릿을 이용한 얼굴영상 추출)

  • Choi, Ji-Young;Kim, Mi-Kyung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.828-831 / 2005
  • In this paper we propose a method to detect human faces in color images using pupil-template matching. Face detection is performed in three stages: (i) separating skin regions from non-skin regions; (ii) generating a candidate face region by fitting a best-fit ellipse; (iii) detecting the face with a pupil template. Skin-region detection is based on a skin color model: a gray-scale likelihood image is generated from the original image using the skin model and then segmented to separate skin regions from non-skin regions. The face region is generated by a best-fit ellipse computed from the moments of the skin region. The generated face regions are then matched against the pupil template, and the face is finally detected.
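
A minimal sketch of the three stages, assuming OpenCV: the Cr/Cb skin thresholds, the ellipse fit (used here in place of the paper's moment-based computation), the hypothetical "pupil.png" template, and the matching threshold are all illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# (i) skin / non-skin separation with a simple Cr-Cb range model
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# (ii) candidate face region: best-fit ellipse of the largest skin blob
contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
face_contour = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.fitEllipse(face_contour)

# (iii) pupil-template matching inside the candidate region
y0, y1 = max(int(cy - h / 2), 0), int(cy + h / 2)
x0, x1 = max(int(cx - w / 2), 0), int(cx + w / 2)
roi = gray[y0:y1, x0:x1]
pupil = cv2.imread("pupil.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pupil template
score = cv2.matchTemplate(roi, pupil, cv2.TM_CCOEFF_NORMED)
_, max_val, _, _ = cv2.minMaxLoc(score)
is_face = max_val > 0.6                                   # assumed decision threshold
```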


A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns the features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. Improving the recognition rate, in particular, requires various processing methods. In the training stage, features must be extracted from the facial image, and linear discriminant analysis (LDA) is the method most commonly used for this. LDA represents a facial image as a point in a high-dimensional space and extracts facial features that distinguish a person by analyzing the class information and the distribution of the points. Because the position of a point in this space is determined by the pixel values of the facial image, incorrect facial features can be extracted by LDA if unnecessary or frequently changing areas are included in the image. Especially when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, which degrades the recognition rate. To solve this problem, this paper detects the facial area in a camera image, removes unnecessary areas using the facial feature area computed with a Gabor filter, and normalizes the size of the facial area. Facial features are extracted from the normalized facial image through LDA and learned by an artificial neural network for face recognition. As a result, the recognition rate was improved by approximately 13% compared with the existing face recognition method that includes unnecessary areas.
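
The snippet below is a hedged sketch of that pipeline: a Gabor filter response selects the facial feature area, the crop is size-normalized, and an LDA projection is learned. The filter parameters, the crop rule, and the output size are assumptions, and the paper's artificial-neural-network stage is omitted.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def feature_area(gray_face, out_size=(64, 64)):
    # Gabor kernel (ksize, sigma, theta, lambd, gamma, psi) highlights eye/nose/mouth texture
    kernel = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5, 0.0)
    response = np.abs(cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kernel))
    ys, xs = np.where(response > response.mean() + response.std())
    # keep only the bounding box of strong responses, then normalize its size
    crop = gray_face[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, out_size).ravel()

def train_lda(faces, labels):
    # faces: list of grayscale face crops, labels: person ids (both assumed given)
    X = np.stack([feature_area(f) for f in faces])
    lda = LinearDiscriminantAnalysis()
    lda.fit(X, labels)
    return lda
```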

Far Distance Face Detection from The Interest Areas Expansion based on User Eye-tracking Information (시선 응시 점 기반의 관심영역 확장을 통한 원 거리 얼굴 검출)

  • Park, Heesun;Hong, Jangpyo;Kim, Sangyeol;Jang, Young-Min;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.113-127 / 2012
  • Many face detection methods based on image processing have been proposed. The most widely used is the Adaboost detector proposed by Viola and Jones, which learns Haar-like features, so its detection performance depends on the training images. It performs well for faces within a certain distance range, but when a face is far from the camera, the face image becomes so small that it may not be detected with the pre-learned Haar-like features. In this paper, we propose a far-distance face detection method that combines the Viola-Jones Adaboost detector with a saliency map and the user's gaze information. The saliency map is used to select candidate face regions in the input image, and faces are then detected among the candidate regions using Adaboost with Haar-like features learned in advance. The user's eye-tracking information is used to select the regions of interest. When a subject is so far from the camera that the face is difficult to detect, we expand the small region around the gaze point using linear interpolation and reuse it as the input image, which increases the face detection performance. We confirmed that the proposed model gives better results than the conventional Adaboost detector in terms of both face detection performance and computation time.
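
As a small sketch of the idea, the function below enlarges a region of interest around a gaze point by linear interpolation before running the pre-trained Viola-Jones cascade shipped with OpenCV. The gaze point, region size, and scale factor are assumed inputs, and the saliency-map stage of the paper is not reproduced here.

```python
import cv2

# pre-trained frontal-face Haar cascade bundled with OpenCV
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_far_face(frame_gray, gaze_xy, roi_half=60, scale=3.0):
    gx, gy = gaze_xy
    h, w = frame_gray.shape
    # interest region around the gaze point
    x0, x1 = max(gx - roi_half, 0), min(gx + roi_half, w)
    y0, y1 = max(gy - roi_half, 0), min(gy + roi_half, h)
    roi = frame_gray[y0:y1, x0:x1]
    # expand the small region by linear interpolation so a distant face becomes
    # large enough for the Haar-like features learned in advance
    roi_big = cv2.resize(roi, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    faces = cascade.detectMultiScale(roi_big, scaleFactor=1.1, minNeighbors=5)
    # map detections back to the original image coordinates
    return [(int(x0 + x / scale), int(y0 + y / scale), int(fw / scale), int(fh / scale))
            for (x, y, fw, fh) in faces]
```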

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori;Fan, Xinjian
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.835-838 / 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, which has also been combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) with a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. When a face is detected, an embedded Hidden Markov Model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face in the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.
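
The toy sketch below shows only the search idea: a genetic algorithm over window positions scored by a face filter. Here face_score is a hypothetical stand-in for the trained neural network, and the window size, population size, and mutation range are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_face_search(image, face_score, win=32, pop_size=30, generations=40):
    # assumes the image is larger than the search window
    h, w = image.shape[:2]
    # each individual is a candidate top-left corner (x, y)
    pop = rng.integers(0, [w - win, h - win], size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([face_score(image[y:y + win, x:x + win]) for x, y in pop])
        parents = pop[np.argsort(fitness)[::-1][:pop_size // 2]]
        # crossover: take x from one parent and y from another
        n_child = pop_size - len(parents)
        children = np.column_stack([
            parents[rng.integers(0, len(parents), n_child), 0],
            parents[rng.integers(0, len(parents), n_child), 1],
        ])
        # mutation: small random shifts, clipped to the image
        children = np.clip(children + rng.integers(-8, 9, children.shape),
                           0, [w - win - 1, h - win - 1])
        pop = np.vstack([parents, children])
    scores = [face_score(image[y:y + win, x:x + win]) for x, y in pop]
    return tuple(pop[int(np.argmax(scores))])   # most face-like window position
```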


Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.15 no.4 / pp.672-679 / 2011
  • In this paper, a skin-color-based face region detection algorithm for TV images is proposed. First, a reference image is set from sampled skin color, and candidate face regions are extracted using the Euclidean distance between the reference and the pixels of the TV image. The eye image is detected using the mean and standard deviation of the color-difference component between Y and C after converting the RGB color model to the CMY color model. The lip image is detected using the Q component after converting the RGB color model to the YIQ color space. The face region is then extracted on a knowledge basis by logically combining the eye image and the lip image. To verify the proposed method, experiments were performed on frontal color images captured from TV. The results showed that the face region can be detected irrespective of the location and size of the face.
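
A rough sketch of the color-model steps, assuming NumPy/OpenCV: the sampled skin reference color, the distance threshold, and the lip rule based on the YIQ Q component are illustrative assumptions, and the eye step in the CMY model is omitted for brevity.

```python
import cv2
import numpy as np

img = cv2.imread("tv_frame.jpg")                     # hypothetical TV frame
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)

# candidate face pixels: small Euclidean distance to a sampled skin color
skin_ref = np.array([190.0, 140.0, 120.0])           # assumed skin sample (R, G, B)
dist = np.linalg.norm(rgb - skin_ref, axis=2)
face_candidate = (dist < 60).astype(np.uint8) * 255

# lips: Q component of the YIQ color space (standard RGB -> YIQ weights)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
q = 0.211 * r - 0.523 * g + 0.312 * b
lips = (q > q.mean() + 2 * q.std()).astype(np.uint8) * 255

# simplified knowledge-based combination: a face candidate must contain lip evidence
face_mask = cv2.bitwise_and(face_candidate, cv2.dilate(lips, np.ones((15, 15), np.uint8)))
```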

The Factor and Analysis on the Face Image to Hairstyle Variation - using by Computer Graphic Simulation- (Hairstyle 변화에 의한 얼굴 이미지 요인과 분석 -Computer Graphic simulation을 이용하여 -)

  • Do Ju Yeun;Kown Young Suk
    • Journal of the Korean Society of Clothing and Textiles / v.16 no.3 s.43 / pp.243-250 / 1992
  • The purpose of this research was to analyze the factor structure of face images produced by different hairstyles, which were generated with computer graphic simulation. To select ten hairstyles, a standard face was chosen from women aged 20-25 years, and four hairstyle factors (hair state (straight/curl), hair length, front hair, and hair parting) were applied. The results were as follows. 1. Five face-image factors emerged from hairstyle variation: negative/positive, individuality, youthfulness, urbanity, and intelligence. 2. The analysis of face image by hairstyle factor showed that (1) for hair state, straight hair conveyed a more youthful, pure, and decent image than curled hair; (2) for hair length, longer hair conveyed a more feminine, soft image, while shorter hair conveyed a more vigorous, youthful image; (3) for front hair, bangs conveyed a more common, moderate, classical image than hair combed all back; and (4) for parting, parted hair conveyed a more modern and urbane image than unparted hair.


Single Image-Based 3D Face Modeling for 3D Printing (3D 프린팅을 위한 단일 영상 기반 3D 얼굴 모델링 연구)

  • Song, Eungyeol;Koh, Wan-Ki;Yu, Sunjin
    • Journal of the Korean Society of Radiology / v.10 no.8 / pp.571-576 / 2016
  • 3D printing has recently been used in various fields. To print a 3D face, 3D face data must first be generated. A laser scanner can be used to acquire 3D face data, but it requires the person to remain still during scanning. In this paper, we propose a 3D face modeling method based on a single image, together with a face transformation system that uses the generated 3D face for virtual cosmetic surgery. Facial feature points are defined from a 3D face database for 3D face data generation. After feature points are extracted from a single face image, the 3D face of the input image is generated by matching them to the 3D feature points defined from the 3D face database. After modeling, a 3D face modification step is applied for uses such as virtual cosmetic surgery.
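
A highly simplified sketch of the fitting step, assuming a frontal orthographic view: mean_shape and shape_basis stand in for statistics derived from the 3D face database, and the 2D feature points and their correspondence to the model are assumed to be given.

```python
import numpy as np

def fit_shape(landmarks_2d, mean_shape, shape_basis, reg=0.1):
    """
    landmarks_2d : (N, 2) feature points extracted from the single face image
    mean_shape   : (N, 3) mean 3D positions of the same feature points
    shape_basis  : (K, N, 3) PCA deformation basis from the 3D face database
    Returns the K shape coefficients and the reconstructed 3D feature points.
    """
    k = shape_basis.shape[0]
    # orthographic projection keeps only the x and y components
    A = shape_basis[:, :, :2].reshape(k, -1).T           # (2N, K)
    b = (landmarks_2d - mean_shape[:, :2]).reshape(-1)   # (2N,)
    # ridge-regularized least squares for the shape coefficients
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ b)
    shape_3d = mean_shape + np.tensordot(coeffs, shape_basis, axes=1)
    return coeffs, shape_3d
```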

The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera (줌 카메라를 통해 획득된 거리별 얼굴 영상을 이용한 원거리 얼굴 인식 기술)

  • Moon, Hae-Min;Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.6 / pp.1139-1145 / 2014
  • User recognition technology, which identifies or verifies a particular individual, is essential for intelligent services in robotic environments. A conventional face recognition algorithm trained with face images captured at a single distance suffers a drop in recognition rate as the distance increases. An algorithm trained with face images captured at each actual distance performs well, but it requires considerable user cooperation. This paper proposes an LDA-based long-distance face recognition method that uses multiple-distance face images obtained from a zoom camera as training images. The proposed technique improved recognition performance by 7.8% on average over the technique trained with single-distance face images. Compared with the technique trained with face images captured at each distance, performance fell by 8.0% on average; however, the proposed method takes less time and requires less cooperation from users when capturing face images.
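
A small sketch of the training idea: several down-sampled versions of each enrolment face stand in for the images taken at different zoom steps, and all of them enter the LDA training set. The output size and scale factors are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def multi_distance_samples(face_gray, out_size=(50, 50), factors=(1.0, 0.5, 0.25)):
    samples = []
    for f in factors:
        small = cv2.resize(face_gray, None, fx=f, fy=f)      # loss of detail ~ larger distance
        samples.append(cv2.resize(small, out_size).ravel())  # back to a fixed feature size
    return samples

def train(faces, labels):
    # faces: grayscale face crops, labels: person ids (assumed given)
    X, y = [], []
    for face, pid in zip(faces, labels):
        for s in multi_distance_samples(face):
            X.append(s)
            y.append(pid)
    lda = LinearDiscriminantAnalysis()
    lda.fit(np.stack(X), y)
    return lda
```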

LVQ network for a face image recognition of the 3D (3D 얼굴 영상 인식을 위한 LVQ 네트워크)

  • 김영렬;박진성;임성진;이용구;엄기환
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.151-154 / 2003
  • In this paper, we propose a method to recognize 3D face images using an LVQ network. The LVQ network of the proposed method is trained with front-view face images obtained with coded light as training data, and it can group face images including side views at various angles. To verify the usefulness of the algorithm, various experiments classifying face images at different angles were carried out.
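
For reference, a compact LVQ1 sketch in NumPy of the kind of network the abstract refers to: prototype vectors are pulled toward samples of their own class and pushed away from samples of other classes. Feature extraction from the coded-light face images is assumed to have been done already.

```python
import numpy as np

def train_lvq(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):
        # initialize prototypes from random samples of each class
        idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * prototypes_per_class)
    W = np.vstack(protos)
    proto_labels = np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(W - X[i], axis=1)))   # nearest prototype
            step = lr * (X[i] - W[j])
            W[j] += step if proto_labels[j] == y[i] else -step     # LVQ1 update
    return W, proto_labels

def classify(W, proto_labels, x):
    return proto_labels[int(np.argmin(np.linalg.norm(W - x, axis=1)))]
```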
