• Title/Summary/Keyword: Color facial image

Search results: 161, processing time: 0.024 seconds

Face recognition using Wavelets and Fuzzy C-Means clustering (웨이블렛과 퍼지 C-Means 클러스터링을 이용한 얼굴 인식)

  • 윤창용;박정호;박민용
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.583-586
    • /
    • 1999
  • In this paper, the wavelet transform is applied to an input 256×256 color image, decomposing it into low-pass and high-pass components. Since the high-pass band contains components in three directions, edges are detected by combining the three parts. After the face position is found using the histogram of the edge component, the face region in the low-pass band is cropped. Because RGB color images are sensitive to luminance, the low-pass image is normalized and the facial region is detected using facial color information. As the wavelet transform decomposes the detected face region into three layers, the dimensionality of the input image is reduced. We use 3,000 images of 10 persons, and the KL transform is applied to classify face vectors effectively. The FCM (Fuzzy C-Means) algorithm groups face vectors with similar features into the same cluster; the number of clusters equals the number of persons, and the mean vector of each cluster is used as a codebook. We verify the performance of the proposed algorithm experimentally; the recognition rates for training and test images are computed using the correlation coefficient and the Euclidean distance.

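The FCM clustering stage described above can be sketched as follows. This is a generic Fuzzy C-Means implementation on toy 2-D vectors, not the authors' code; the fuzzifier m = 2 and the iteration budget are assumed defaults.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Generic FCM: returns cluster centers and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))  # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

# Toy data: two well-separated groups; one cluster per "person", as in the paper.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=0)                   # hard assignment for recognition
```

The cluster mean vectors in `centers` would play the role of the codebook described in the abstract.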

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.4
    • /
    • pp.897-904
    • /
    • 2005
  • Face recognition has been an active research area because face image data are easy to acquire and applicable to a wide range of real-world problems. Due to the high dimensionality of the face image space, however, face images are not easy to process. In this paper, we propose a method that reduces the dimensionality of facial data and extracts features from holistic face images. The proposed algorithm consists of two parts. The first uses principal component analysis (PCA) to transform three-dimensional color facial images into one-dimensional gray facial images; PCA enhances the image contrast and thereby raises the recognition rate. The second is integrated linear discriminant analysis (PCA+LDA), which combines PCA for dimensionality reduction with LDA for discrimination of facial vectors. Integrating the two steps allows a concise algorithmic expression and prevents the information loss that occurs when they are performed separately. To validate the proposed method, the algorithm is implemented and tested on well-controlled face databases.
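The PCA step used for dimensionality reduction can be sketched as follows. This is a generic eigen-decomposition implementation on toy vectors, not the authors' integrated PCA+LDA pipeline.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)        # features x features covariance
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]              # top-k components, largest first
    return Xc @ W, W, mean

# Toy "face vectors": 6 samples, 4 features, reduced to 2 dimensions.
rng = np.random.default_rng(1)
X = np.arange(24, dtype=float).reshape(6, 4) + rng.normal(0, 0.1, (6, 4))
Y, W, mean = pca_project(X, k=2)          # Y holds the reduced features
```

Reconstructing via `Y @ W.T + mean` recovers the data up to the discarded components, which is what makes PCA useful before a discriminant step such as LDA.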

Side-View Face Detection Using the Locations of the Nose and Chin and Image Color (코와 턱의 위치 및 색상을 이용한 측면 얼굴 검출)

  • 송영준;장언동;박원배;서형석
    • The Journal of the Korea Contents Association
    • /
    • v.3 no.4
    • /
    • pp.17-22
    • /
    • 2003
  • In this paper, we propose a new side-view face detection method for color images containing one or more faces. It uses color and the geometric distance between the nose and chin. We convert from RGB to the YCbCr color space and extract candidate face regions from the image using skin-color information. The extracted regions are then processed by a morphological filter and labeled. We also correct the gradient of an inclined face image using the projection characteristics of the nose, and detect side-view faces inclined up to 45 degrees to the left or right. We achieve a 92% detection rate on 100 test images.

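The RGB-to-YCbCr conversion and skin-color test described above can be sketched as follows. The BT.601 coefficients and the Cb/Cr box thresholds are common values from the skin-detection literature, not necessarily the paper's exact choices.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Widely used Cb/Cr box thresholds for skin pixels (approximate)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

# A typical light skin tone passes the test; a saturated blue pixel does not.
```

Classifying in the Cb/Cr plane is what makes the method relatively insensitive to luminance, since Y is simply ignored.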

Harris Corner Detection for Eyes Detection in Facial Images

  • Navastara, Dini Adni;Koo, Kyung-Mo;Park, Hyun-Jun;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.373-376
    • /
    • 2013
  • Nowadays, eye detection is required and considered the most important step in several applications, such as eye tracking, face identification and recognition, facial expression analysis, and iris detection. This paper presents eye detection in facial images using Harris corner detection. First, Haar-like features are used to detect the face region in an image. To separate the eye region from the whole face region, a projection function is applied. In the last step, Harris corner detection is used to locate the eyes. In the experimental results, the eye locations in both grayscale and color facial images were detected accurately and effectively.

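The Harris corner response used in the last step can be sketched as follows, on a synthetic image. The finite-difference gradients and 3×3 box window are simplifying assumptions (Gaussian windows are more common in practice).

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    using simple finite-difference gradients and a 3x3 box window."""
    Ix = np.zeros_like(img); Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def smooth(a):                       # 3x3 box sum (window of M)
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out[1:-1, 1:-1] += a[1+dy:a.shape[0]-1+dy, 1+dx:a.shape[1]-1+dx]
        return out
    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square on a black background has four corners.
img = np.zeros((20, 20)); img[5:15, 5:15] = 1.0
R = harris_response(img)
```

The response is positive at corners, negative along edges, and zero in flat regions, which is why thresholding R isolates corner candidates such as eye corners.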

A Cross-Cultural Study of Facial Awareness, Influential Factors, and Attractiveness Preferences Among Korean, Japanese, and Chinese Men and Women Evaluating Korean Women by Facial Type (한국여성의 얼굴이미지 유형별 인식영향요소와 매력선호도에 대한 한중일 남녀 비교)

  • Baek, Kyoung-Jin;Kim, Young-In
    • Journal of the Korean Society of Costume
    • /
    • v.65 no.3
    • /
    • pp.1-14
    • /
    • 2015
  • The purpose of this study is to identify cross-cultural features among Korea, China, and Japan by comparing differences in facial awareness, attractiveness preferences, and consideration of facial parts in a group of Korean, Chinese, and Japanese men and women as they evaluated the faces of Korean women in their 20s. A survey was conducted targeting male and female Korean, Chinese, and Japanese college students in their 20s. Frequency analysis, ANOVA, the Duncan test, factor analysis, reliability analysis, and MANOVA were carried out using SPSS 18.0. The results are as follows: the faces of Korean women in their 20s, as evaluated by Korean, Chinese, and Japanese men and women in their 20s, were classified into four categories: 'Youthfulness', 'Classiness', 'Friendliness', and 'Activeness'. Differences in facial image awareness were observed depending on nationality and gender. Korean participants were found to place importance on overall morphological factors, Japanese participants focused on the eyes, and Chinese participants on skin color. Women of all nationalities showed, on average, a higher awareness of facial parts than men. No significant differences in facial attractiveness preferences were found based on nationality or gender, but there were differences in how the participants evaluated faces for attractiveness, showing that reasons for preferences may vary even when the preferences are the same.

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions: Part B
    • /
    • v.9B no.6
    • /
    • pp.853-862
    • /
    • 2002
  • Face detection can be defined as follows: given an arbitrary digitized image or image sequence, determine whether any human face is present and, if so, return its location, direction, size, and so on. This technique underlies many applications, such as face recognition, facial expression analysis, and head gesture recognition, and is one of their important quality factors. Detecting a face in a given image is considerably difficult because facial expression, pose, face size, and lighting conditions change the overall appearance of faces, making rapid and exact detection hard. This paper therefore proposes a fast and exact face detection method that overcomes these restrictions by using a neural network. The proposed system detects faces rapidly regardless of facial expression, background, and pose. Face detection is performed by the neural network, and the detection response time is shortened by reducing the search region and decreasing the network's computation time. The search region is reduced using skin-color segmentation and frame differencing, and the computation time is decreased by reducing the size of the network's input vector with principal component analysis (PCA), which reduces the dimensionality of the data. The pose is also estimated from the extracted facial image and the eye region is located, which provides further information about the face. The experiments measured the success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin-color segmentation, the success rate differed depending on the camera settings. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses produced different results in eye-region detection. The experimental results show a satisfactory detection rate and processing time for a real-time system.
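The squared Mahalanobis distance used for evaluation can be computed as follows; a minimal sketch on toy data, not the authors' code.

```python
import numpy as np

def squared_mahalanobis(x, mean, cov):
    """Squared Mahalanobis distance (x - mean)^T Cov^{-1} (x - mean)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.inv(cov) @ d)

# With an identity covariance it reduces to the squared Euclidean distance.
cov = np.eye(2)
d2 = squared_mahalanobis([3.0, 4.0], [0.0, 0.0], cov)
```

Unlike the plain Euclidean distance, this weighting accounts for the variance of each feature, so dimensions with large spread contribute less to the distance.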

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • It is very important to extract the expression data and capture a face image from a video for online 3D face animation. Recently, there have been many studies on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and expression data from real-time video input. Our system consists of three steps: face detection, face feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin-color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. Then we trace the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system can trace the expression data at about 8 fps.
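The color probability model used for tracing feature displacement can be sketched as follows. This fits a single Gaussian to (Cb, Cr) samples and evaluates a per-pixel likelihood map; the single-Gaussian form is an assumption, as the abstract only states that a color probability distribution model is used.

```python
import numpy as np

def fit_color_model(samples):
    """Fit a Gaussian to (Cb, Cr) samples of a tracked feature region."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), np.cov(samples.T) + 1e-6 * np.eye(2)

def probability_map(pixels, mean, cov):
    """Per-pixel Gaussian likelihood of belonging to the tracked feature."""
    inv = np.linalg.inv(cov)
    d = pixels - mean
    expo = np.einsum('...i,ij,...j->...', d, inv, d)
    return np.exp(-0.5 * expo)

# Toy frame: 4x4 grid of (Cb, Cr) pixels; the feature color sits at one pixel.
frame = np.full((4, 4, 2), 100.0)
frame[2, 3] = [110.0, 150.0]                        # feature-colored pixel
mean, cov = fit_color_model([[109, 150], [111, 150], [110, 149], [110, 151]])
P = probability_map(frame, mean, cov)
loc = np.unravel_index(P.argmax(), P.shape)         # tracked feature position
```

Re-locating the likelihood peak in each new frame gives the feature displacement between consecutive frames.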

The Face Color Analysis According to the Kidney Foot Acupressure Stimulation (신장 발 지압 자극에 따른 얼굴 색상 분석)

  • Kim, Bong-Hyun;Cho, Dong-Uk;Han, Kil-Sung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.1
    • /
    • pp.133-138
    • /
    • 2012
  • Parts of the human body such as the hands, feet, and face are related to the five organs; in particular, the feet are called the 'second heart'. In this paper, we analyze changes in facial color caused by stimulating the foot acupressure point associated with the kidney. To this end, we collected facial images of 10 males in their 20s with normal kidneys, before and after stimulation of the kidney-associated foot acupressure point, and measured K of the CMYK color system and L of the Lab color system in the Jigak area of the face, which is associated with the kidney. As a result of our experiment, after stimulation of the kidney-associated foot acupressure point, L increased and K decreased in 90% of the subjects. Several experiments demonstrate the effectiveness of this approach.
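The two measured quantities, K of CMYK and L of Lab, can be computed from RGB as follows. The Lab conversion assumes sRGB input and a D65 white point, which the abstract does not specify.

```python
def cmyk_k(r, g, b):
    """K (black) channel of the naive RGB -> CMYK conversion, in [0, 1]."""
    return 1.0 - max(r, g, b) / 255.0

def lab_l(r, g, b):
    """L* of CIE Lab, assuming sRGB input and a D65 white point."""
    def lin(c):                           # sRGB gamma expansion
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    # Relative luminance Y; the white point has Y = 1.
    y = 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

k_white, l_white = cmyk_k(255, 255, 255), lab_l(255, 255, 255)
k_black, l_black = cmyk_k(0, 0, 0), lab_l(0, 0, 0)
```

A brighter facial region therefore yields higher L and lower K, which matches the direction of change the paper reports after stimulation.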

Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.2 s.308
    • /
    • pp.63-71
    • /
    • 2006
  • Denoising and reconstruction of color images are extensively studied in the fields of computer vision and image processing. Denoising and reconstruction of color face images are more difficult than those of natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is not easy to remove with vectorial color filters. The proposed method is composed of the following five steps: training a canonical eigenface space using PCA; automatic extraction of facial features using an active appearance model; refining the reconstructed color image using a bilateral filter; extraction of noise regions using the variance of the training data; and reconstruction using the partial information of the input image (excluding the noise regions), followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of the input faces while efficiently removing complex color noise.
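The eigenface projection and blending idea behind the first and last steps can be sketched as follows. This is a greatly simplified illustration: it projects the full noisy vector once and replaces only the masked pixels, whereas the paper reconstructs recursively from the partial, noise-free information.

```python
import numpy as np

def train_eigenspace(faces, k):
    """PCA (eigenface) basis from training face vectors (rows)."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]                   # top-k principal directions

def denoise(face, mean, basis, noise_mask):
    """Project onto the eigenspace, reconstruct, and keep the original
    pixels outside the noise mask."""
    recon = mean + basis.T @ (basis @ (face - mean))
    return np.where(noise_mask, recon, face)

# Toy 1-D "faces": 5 training vectors of length 8; noise at one pixel.
faces = np.array([[float(i + j) for j in range(8)] for i in range(5)])
mean, basis = train_eigenspace(faces, k=1)
clean = faces[2].copy()
noisy = clean.copy(); noisy[3] += 50.0
mask = np.zeros(8, dtype=bool); mask[3] = True
out = denoise(noisy, mean, basis, mask)
```

Because the eigenspace only spans plausible faces, projecting onto it pulls the corrupted pixel back toward face-like values while the unmasked pixels are left untouched.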

Face and Iris Detection Algorithm based on SURF and circular Hough Transform (서프 및 하프변환 기반 운전자 동공 검출기법)

  • Artem, Lenskiy;Lee, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.175-182
    • /
    • 2010
  • The paper presents a novel algorithm for face and iris detection, applied to driver iris monitoring. The proposed algorithm consists of the following major steps: skin-color segmentation, facial feature segmentation, and iris positioning. For skin segmentation, we apply a multi-layer perceptron to approximate the statistical probability of skin colors and filter out pixels with low probabilities. The next step segments the face region into the following categories: eyes, mouth, eyebrows, and remaining facial regions. For this purpose we propose a novel segmentation technique based on the estimation of facial-class probability density functions (PDFs). Each facial-class PDF is estimated from salient features extracted from the corresponding facial image region, and each pixel is then assigned to the class with the highest probability among the four estimated PDFs. The final step applies the circular Hough transform to the detected eye regions to extract the position and radius of the iris. We tested our system on two data sets. The first, obtained from the Web, contains faces under different illuminations. The second was collected by us and contains images from video sequences recorded by a CCD camera while a driver was driving a car. The experimental results show high detection rates.
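The final circular-Hough step can be sketched as follows, voting for circle centers over a small set of candidate radii on a synthetic iris boundary; the 64-angle discretization is an arbitrary choice for illustration.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate center votes for each candidate radius: every edge
    point votes along a circle of that radius around itself."""
    H = np.zeros((len(radii), shape[0], shape[1]))
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(H, (ri, cy[ok], cx[ok]), 1)
    return H

# Synthetic iris boundary: points on a circle of radius 5 centered at (12, 12).
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
pts = [(int(round(12 + 5 * np.sin(a))), int(round(12 + 5 * np.cos(a))))
       for a in angles]
H = circular_hough(pts, (25, 25), radii=[3, 5, 7])
ri, cy, cx = np.unravel_index(H.argmax(), H.shape)  # best radius and center
```

The accumulator peak jointly recovers the iris center and radius, which is exactly the output the iris-positioning step needs.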