• Title/Summary/Keyword: face to face


Face Recognition Based on PCA on Wavelet Subband of Average-Half-Face

  • Satone, M.P.;Kharate, G.K.
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.483-494
    • /
    • 2012
  • Many recent events, such as terrorist attacks, have exposed defects in even the most sophisticated security systems. It is therefore necessary to improve security systems based on bodily or behavioral characteristics, often called biometrics. Together with growing interest in the development of human-computer interfaces and biometric identification, human face recognition has become an active research area. Face recognition appears to offer several advantages over other biometric methods. Principal Component Analysis (PCA) is now widely adopted in face recognition algorithms, yet PCA still has limitations such as poor discriminatory power and a large computational load. This paper proposes a novel face recognition algorithm in which a mid-band frequency component of partial information is used for the PCA representation. Because the human face exhibits even symmetry, half of a face is sufficient for recognition, and this partial information saves storage and computation time. In comparison with the traditional use of PCA, the proposed method gives better recognition accuracy and discriminatory power, and it reduces the computational load and storage requirements significantly.
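The half-face PCA idea above can be sketched in a few lines. The following is a minimal illustration on synthetic data, not the authors' implementation: the 8×8 image size and k=5 components are arbitrary choices, and the wavelet mid-band step the paper applies before PCA is omitted.

```python
import numpy as np

def average_half_face(img):
    """Fold a (h, w) face image about its vertical midline and
    average the left half with the mirrored right half."""
    h, w = img.shape
    half = w // 2
    left = img[:, :half]
    right = np.fliplr(img[:, w - half:])
    return (left + right) / 2.0  # (h, w//2): half the storage

def pca_fit(X, k):
    """PCA via SVD on row-vector samples X (n, d); returns the mean
    and the top-k principal axes as a (k, d) array."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(X, mean, axes):
    """Low-dimensional PCA coefficients used as recognition features."""
    return (X - mean) @ axes.T

# toy data: 10 synthetic 8x8 "faces"
rng = np.random.default_rng(0)
faces = rng.random((10, 8, 8))
halves = np.stack([average_half_face(f) for f in faces])  # (10, 8, 4)
X = halves.reshape(10, -1)                                # 32-dim vs 64-dim
mean, axes = pca_fit(X, k=5)
coeffs = project(X, mean, axes)                           # (10, 5) features
```

Matching a probe face then reduces to a nearest-neighbor search over `coeffs`, on vectors half the size the full face would need.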

A real-time face tracking method using fuzzy controller (Fuzzy controller를 이용한 실시간 얼굴 추적하는 방법)

  • Sa, In-Kyu;Ahn, Ho-Seok;Lee, Hyung-Kyu;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.333-334
    • /
    • 2008
  • Real-time face tracking is a broad topic, covering a large spectrum of technologies and applications. Briefly, face tracking is a tracing technique that follows a human face in any direction. It requires algorithms such as human face detection and a motion controller to track the face, and processing and calculation time are the most important factors influencing the tracking system. In this paper, two algorithms are used to find the human face: the CAMShift algorithm and a face detection algorithm using OpenCV. A fuzzy controller is utilized to move a pan-tilt camera system that can move in four directions along the x and y axes.
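The fuzzy control step the abstract mentions can be illustrated with a toy Mamdani-style controller that maps the horizontal error of the tracked face center to a pan command. The membership breakpoints (±160, ±320 pixels) and the single-axis rule base are assumptions for illustration; the paper's actual controller and pan-tilt interface are not specified here.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan_speed(err):
    """Map the horizontal face-center error (pixels, + means the face
    is to the right) to a pan command in [-1, 1] by weighted-average
    defuzzification of three rules."""
    mu = {
        'left':   tri(err, -320, -160, 0),
        'center': tri(err, -160, 0, 160),
        'right':  tri(err, 0, 160, 320),
    }
    out = {'left': -1.0, 'center': 0.0, 'right': 1.0}  # rule consequents
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den
```

A centered face yields a zero command, while a face drifting right produces a smoothly increasing positive pan speed, which is the behavior that keeps the camera motion free of abrupt jumps.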


A Secure Face Cryptography for Identity Document Based on Distance Measures

  • Arshad, Nasim;Moon, Kwang-Seok;Kim, Jong-Nam
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.10
    • /
    • pp.1156-1162
    • /
    • 2013
  • Face verification has been widely studied during the past two decades. One of the challenges is the rising concern about the security and privacy of the template database. In this paper, we propose a secure face verification system that generates a unique cryptographic key from a face template. Face images are processed to produce face templates, or codes, to be used for the encryption and decryption tasks. The resulting identity data is encrypted using the Advanced Encryption Standard (AES). Distance metrics, namely Hamming distance and Euclidean distance, are used for the template matching identification process, where template matching is a process used in pattern recognition. The proposed system is tested on the ORL, Yale, and PKNU face databases, which contain 360, 135, and 54 training images, respectively. We employ Principal Component Analysis (PCA) to determine the most discriminating features among face images. The experimental results showed that the proposed distance measure was one of the most promising measures with respect to different characteristics of biometric systems: using the proposed method, fewer images needed to be extracted to achieve 100% cumulative recognition than with any other tested distance measure.
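The template-matching and key-generation stages can be sketched with standard-library tools only. The 2-byte templates, the acceptance threshold of 4 bits, and the SHA-256 key derivation are illustrative assumptions; the paper encrypts identity data with AES, which is not in the Python standard library, so only the key-derivation and Hamming-distance matching steps are shown.

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary templates."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count('1') for x, y in zip(a, b))

def key_from_template(template: bytes) -> bytes:
    """Derive a 256-bit key from a quantized face template; a key like
    this would feed an AES cipher in the paper's pipeline."""
    return hashlib.sha256(template).digest()

enrolled  = bytes([0b10110010, 0b01001101])
probe_ok  = bytes([0b10110010, 0b01001111])  # 1 bit differs: same person
probe_bad = bytes([0b01001101, 0b10110010])  # 16 bits differ: impostor

THRESHOLD = 4
accept = hamming(enrolled, probe_ok) <= THRESHOLD
reject = hamming(enrolled, probe_bad) > THRESHOLD
key = key_from_template(enrolled)
```

Note the design tension the abstract alludes to: the derived key is only reproducible if the quantized template is bit-identical, which is why the matching threshold operates on templates before any key is released.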

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1566-1576
    • /
    • 2007
  • Camera pose information from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and for other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera pose determination algorithm that works from a single 2D face image using the relationship between mouth position information and face region boundary information. Our algorithm first corrects color bias with a lighting compensation algorithm, then nonlinearly transforms the image into the YCbCr color space and uses the visible chrominance features of the face in this color space to detect the face region. For each face candidate, the nearly reversed relationship between the Cb and Cr clusters of facial features is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the camera's rotation angles about both the x and y axes, and the relationship between face region size and camera-face distance determines the camera-face distance. Experimental results demonstrate the validity of the algorithm, and the correct determination rate is sufficient for practical application.
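The YCbCr skin-color step of such detectors can be sketched as follows. The BT.601 conversion is standard; the Cb/Cr box thresholds are commonly cited illustrative values, not the bounds used in this paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Threshold Cb/Cr to a commonly cited skin-tone box; a face
    detector would then look for large connected regions in this mask."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (220, 160, 130)   # skin-like pixel
img[1, 1] = (30, 90, 200)     # blue, non-skin
mask = skin_mask(img)
```

Working in the chrominance plane is what makes this step tolerant of brightness changes: the luma channel Y is computed but deliberately ignored by the mask.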


A Study On Face Feature Points Using Active Discrete Wavelet Transform (Active Discrete Wavelet Transform를 이용한 얼굴 특징 점 추출)

  • Chun, Soon-Yong;Zijing, Qian;Ji, Un-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.1
    • /
    • pp.7-16
    • /
    • 2010
  • Face recognition is an active subject in the area of computer pattern recognition with a wide range of potential applications. Automatic extraction of feature points from a face image is an important step in automatic face recognition, and whether facial features are extracted correctly has a direct influence on recognition performance. In this paper, a new method of facial feature extraction based on the Discrete Wavelet Transform is proposed. First, the face image is captured with a PC camera. Second, the face image is decomposed using the discrete wavelet transform. Finally, horizontal- and vertical-direction projection methods are used to extract the features of the human face, from which face recognition can be performed. The results show that this method can extract the feature points of a human face quickly and accurately. The system not only detects face feature points with great accuracy but is also more robust at locating facial features than traditional methods.
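A one-level Haar wavelet decomposition followed by horizontal/vertical projections, as the abstract outlines, can be sketched like this. The toy 8×8 "face" with a dark band standing in for the eyes is an assumption for illustration.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform; returns the
    approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def projections(img):
    """Horizontal/vertical integral projections; valleys in the
    horizontal projection mark dark rows such as eyes and mouth."""
    return img.sum(axis=1), img.sum(axis=0)

# toy 8x8 "face": a dark horizontal band plays the role of the eyes
face = np.ones((8, 8))
face[2, :] = 0.0
ll, lh, hl, hh = haar_dwt2(face)
h_proj, v_proj = projections(ll)   # minimum of h_proj = the dark band
```

Running the projections on the LL subband rather than the raw image is what buys speed: the search operates on a quarter of the pixels while the dark-band valley survives the averaging.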

Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.6
    • /
    • pp.817-828
    • /
    • 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods and preserves the designer's (artist's) style. The proposed system automatically generates a character face similar to a human face using facial features extracted from a photograph. In addition, unlike previous face generation systems based on contours, the proposed system is based on color and composes the face from facial features and shading extracted from a photograph, so it can produce more realistic faces. Since the system is vector-based, the generated character face has no size limit or constraint; its shape can be transformed freely, and various facial expressions can be applied to the 2D face. Moreover, it is distinct from other approaches in that the artist's impression is preserved in the result.

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori;Fan, Xinjian
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.835-838
    • /
    • 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper, we describe an integrated face detection and face recognition system for a welfare robot, combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. When a face is detected, an embedded Hidden Markov Model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face inside the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.


3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.6
    • /
    • pp.403-411
    • /
    • 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods refer to 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. We present a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform about each axis is defined with respect to a reference position; in the aligning process, it converts an input 3D face with large pose variations to the reference frontal view. Part of the face is then cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is carried out on the face for sampling at equal intervals and for filling holes, and color interpolation is carried out at the same intervals. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and evaluate the performance of the suggested process.
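The landmark-based rigid alignment step can be illustrated with the Kabsch algorithm, which recovers the rotation aligning detected eye/mouth landmarks to a reference frontal configuration. The three landmark coordinates and the 30° yaw below are made-up test data; the paper's per-axis rotational transforms may differ in detail.

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation aligning centroid-subtracted point set P onto Q
    (both (n, 3) row-vector arrays), via SVD (Kabsch algorithm)."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against reflections
    return Vt.T @ D @ U.T

def align_landmarks(src, ref):
    """Rigidly align landmarks (e.g. two eyes and the mouth from ASMs)
    to a reference frontal configuration."""
    sc, rc = src.mean(axis=0), ref.mean(axis=0)
    R = kabsch(src - sc, ref - rc)
    return (src - sc) @ R.T + rc, R

# reference frontal landmarks: left eye, right eye, mouth
ref = np.array([[-30.0, 40.0, 0.0], [30.0, 40.0, 0.0], [0.0, -20.0, 10.0]])
# the same face rotated 30 degrees about the y axis (a yawed pose)
t = np.radians(30)
Ry = np.array([[np.cos(t), 0, np.sin(t)],
               [0, 1, 0],
               [-np.sin(t), 0, np.cos(t)]])
src = ref @ Ry.T
aligned, R = align_landmarks(src, ref)   # aligned should match ref
```

The recovered `R` would then be applied to the whole 3D mesh, not just the landmarks, before the cropping and normalization stages described above.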

A study on the eye Location for Video-Conferencing Interface (화상 회의 인터페이스를 위한 눈 위치 검출에 관한 연구)

  • Jung, Jo-Nam;Gang, Jang-Mook;Bang, Kee-Chun
    • Journal of Digital Contents Society
    • /
    • v.7 no.1
    • /
    • pp.67-74
    • /
    • 2006
  • In current video-conferencing systems, the user's face movements are restricted by the fixed camera, which is inconvenient for users. To solve this problem, tracking of face movements is needed. Tracking the whole face requires much computing time, and the whole face is difficult to define as a single feature; thus, using several feature points in the face is more desirable for tracking face movements efficiently. This paper addresses an effective eye location algorithm, an essential step in an automatic human face tracking system for natural video-conferencing. The location of the eyes is very important information for face tracking, as the eyes have the clearest and simplest attributes in the face. The proposed algorithm is applied to candidate face regions obtained from face region extraction. It is not sensitive to lighting conditions and has no restrictions on face size or faces with glasses. The proposed algorithm shows very encouraging results in experiments in video-conferencing environments.


Performance Analysis of Face Recognition by Face Image Resolution Using CNN without Backpropagation and LDA (역전파가 제거된 CNN과 LDA를 이용한 얼굴 영상 해상도별 얼굴 인식률 분석)

  • Moon, Hae-Min;Park, Jin-Won;Pan, Sung Bum
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.24-29
    • /
    • 2016
  • To satisfy the needs of a high-level intelligent surveillance system, the system must be able to extract objects and classify them to identify precise information about each object. The representative method of identifying a person is face recognition, whose recognition rate changes with environmental factors such as illumination, background, and camera angle. In this paper, we analyze the robustness of face recognition as the distance to the subject changes, through a variety of experiments conducted with real face images taken at 1 m to 5 m. Face recognition based on Linear Discriminant Analysis (LDA) shows the best performance, an average of 75.4%, when a large number of face images per person is used for training, whereas face recognition based on a Convolutional Neural Network (CNN) shows the best performance, an average of 69.8%, when the number of face images per person is fewer than five. In addition, the recognition rate for low-resolution faces decreases rapidly when the face image is smaller than 15×15 pixels.
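The LDA side of the comparison, together with the resolution reduction, can be sketched as follows. The four 2-D training points and the average-pooling downsampler are toy assumptions; they only illustrate Fisher's discriminant and the effect of shrinking images, not the paper's experimental setup.

```python
import numpy as np

def lda_axes(X, y, k):
    """Fisher LDA: build within-class (Sw) and between-class (Sb)
    scatter matrices and return the top-k discriminant axes."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:k]]

def downsample(img, f):
    """Crude f-by-f average pooling to mimic a low camera resolution."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# two well-separated classes of 2-D "face features"
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])
axes = lda_axes(X, y, k=1)
proj = X @ axes            # 1-D discriminant scores per sample

face = np.arange(64.0).reshape(8, 8)
small = downsample(face, 2)  # 8x8 -> 4x4, as in the resolution study
```

Repeating the projection on progressively downsampled inputs is the shape of the experiment the abstract reports: class separation along the discriminant axis shrinks as the images lose detail.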