• Title/Summary/Keyword: Color Facial Image

Search Results: 161

Face Tracking System Using Updated Skin Color (업데이트된 피부색을 이용한 얼굴 추적 시스템)

  • Ahn, Kyung-Hee;Kim, Jong-Ho
    • Journal of Korea Multimedia Society / v.18 no.5 / pp.610-619 / 2015
  • In this paper, we propose a real-time face tracking system using an adaptive face detector and a tracking algorithm. To detect facial features accurately, an image is divided into background and face-candidate regions by a skin-color identification system that is updated in real time. Facial characteristics are extracted using five types of simple Haar-like features. The extracted features are reinterpreted by Principal Component Analysis (PCA), and the resulting principal components are classified into facial and non-facial areas by a Support Vector Machine (SVM). Face movement is tracked by a Kalman filter and Mean shift, which use the static information of the detected faces and the differences between the previous and current frames. The proposed system identifies the initial skin color and updates it through a real-time color detection system; updating the skin color allows similar background colors to be removed. Performance improves by up to 20% when the background is suppressed, compared with extracting features from the entire region, and the use of the Kalman filter and Mean shift increases both detection rate and speed.
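
The real-time skin-color update described above can be sketched as an exponential moving average over a Gaussian color model. This is a minimal numpy illustration, not the paper's exact formulation; the function names, the Mahalanobis-distance classifier, and all parameter values are assumptions:

```python
import numpy as np

def update_skin_model(mean, cov, samples, alpha=0.2):
    """Blend the running skin-color model with pixels sampled from the
    newly detected face region (exponential moving average)."""
    new_mean = np.mean(samples, axis=0)
    new_cov = np.cov(samples, rowvar=False)
    mean = (1.0 - alpha) * mean + alpha * new_mean
    cov = (1.0 - alpha) * cov + alpha * new_cov
    return mean, cov

def skin_mask(pixels, mean, cov, thresh=9.0):
    """Mark pixels whose squared Mahalanobis distance to the skin model
    falls below a threshold as skin candidates."""
    inv = np.linalg.inv(cov)
    d = pixels - mean
    m2 = np.einsum('ij,jk,ik->i', d, inv, d)  # squared Mahalanobis distance
    return m2 < thresh
```

Because the model follows the detected face over time, a background color that initially resembled skin drifts out of the accepted region as frames accumulate.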

Real-Time Automatic Human Face Detection and Recognition System Using Skin Colors of Face, Face Feature Vectors and Facial Angle Informations (얼굴피부색, 얼굴특징벡터 및 안면각 정보를 이용한 실시간 자동얼굴검출 및 인식시스템)

  • Kim, Yeong-Il;Lee, Eung-Ju
    • The KIPS Transactions:PartB / v.9B no.4 / pp.491-500 / 2002
  • In this paper, we propose a real-time face detection and recognition system that uses skin-color information, geometrical facial feature vectors, and facial-angle information from color face images. The proposed algorithm improves face-region extraction by combining skin-color information in the HSI color space with face edge information, and improves recognition by using geometrical feature vectors and facial angles computed from the extracted face region. In experiments, the proposed algorithm shows better face-region extraction and recognition performance than conventional methods.
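
Skin-color segmentation in the HSI color space, as used here, starts from an RGB-to-HSI conversion. A minimal sketch of the standard conversion (the paper's skin thresholds are not given, so none are assumed):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (0..255) to HSI: H in degrees, S and I in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                 # hue lies in the lower half-plane
        h = 360.0 - h
    return h, s, i
```

Skin detection then reduces to thresholding H (and usually S) per pixel, which is what makes the HSI representation attractive for this task.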

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.783-788 / 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform moves, the captured facial images are small and vary in pose, so the algorithm must cope with these constraints while detecting and recognizing faces in nearly real time. The detection step follows a coarse-to-fine strategy. First, a region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To do so, simplified facial feature maps based on characteristic chrominance are constructed, and candidate pixels are segmented into eye and mouth pixel groups. These candidate features are then verified by checking whether the lengths and orientations of feature pairs are consistent with face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that contains the feature triangle connecting the two eyes and the mouth. A random set of lattice lines is composed and laid over this convex-hull area to represent its 2D appearance. The face database images are processed in the same way for each person class, and recognition is performed by using the distance measure between matched lattice lines as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], enable real-time face detection and recognition, and guarantee correct recognition regardless of moderate pose variation. Their usefulness in mobile robot applications is demonstrated.

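The geometric verification step described above (checking that the lengths and orientations of feature pairs fit face geometry) can be sketched as follows; the thresholds and function names are illustrative guesses, not the paper's values:

```python
import math

def plausible_face_triangle(eye_l, eye_r, mouth,
                            max_tilt_deg=30.0, ratio_range=(0.6, 1.6)):
    """Check whether two eye candidates and a mouth candidate form a
    plausible face triangle: the inter-eye line must be roughly
    horizontal and the eyes-to-mouth distance must be comparable to
    the inter-eye distance."""
    dx, dy = eye_r[0] - eye_l[0], eye_r[1] - eye_l[1]
    eye_dist = math.hypot(dx, dy)
    if eye_dist == 0:
        return False
    tilt = abs(math.degrees(math.atan2(dy, dx)))        # inter-eye orientation
    mid = ((eye_l[0] + eye_r[0]) / 2.0, (eye_l[1] + eye_r[1]) / 2.0)
    mouth_dist = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    ratio = mouth_dist / eye_dist
    return tilt <= max_tilt_deg and ratio_range[0] <= ratio <= ratio_range[1]
```

Candidate eye/mouth pixel groups that fail this test are discarded before the recognition step.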

Content-based Face Retrieval System using Wavelet and Neural Network (Wavelet과 신경망을 이용한 내용기반 얼굴 검색 시스템)

  • 강영미;정성환
    • Journal of the Korea Computer Industry Society / v.2 no.3 / pp.265-274 / 2001
  • In this paper, we propose a content-based face retrieval system that can retrieve a face based on a facial feature region. Instead of a keyword query such as a resident registration number or name, our system uses a facial image as a visual query; that is, it recognizes a face based on a specific feature region that includes the eyes, nose, and mouth. To this end, we extract the feature region using color information based on the HSI color model and edge information from the wavelet-transformed image, and then recognize the feature region with a neural network. The proposed system is implemented in a client/server environment on an Oracle DBMS for a large facial-image database. In an experiment with 150 varied facial images, the proposed method achieved a recognition rate of about 88.3%.

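The wavelet transform used above to obtain edge information can be illustrated with one level of the 2D Haar transform, whose detail sub-bands carry horizontal, vertical, and diagonal edge energy. A minimal numpy sketch (the paper's wavelet basis and decomposition depth are not specified here):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform on an even-sized image.
    Returns the approximation (LL) and the three detail sub-bands
    (LH, HL, HH); edge information lives in the detail bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

On a flat region all three detail bands are zero, so thresholding them yields an edge map at half resolution.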

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.633-638 / 2005
  • Face extraction is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is constructed by applying morphological dilation to the combination of the facial color image and a previously dilated differential image; this filter removes complex backgrounds and detects the facial outline. Because Active Contour Models are strongly affected by their initial curves, we estimate the rotation angle using the geometric ratios of the face, eyes, and mouth. We use edgeness and intensity as the image energy in order to extract the outline in areas with weak edges. We acquired images of various head poses, with both eyes visible, from five people indoors against complex backgrounds. In an experiment with 125 images in total (25 per person), the average facial-outline extraction rate was 98.1% and the average processing time was 0.2 s.

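One plausible reading of the DCM filter construction (dilation applied to the combination of the skin-color mask and a pre-dilated frame-difference mask) can be sketched with a naive binary dilation; the function names and the 3x3 structuring element are assumptions:

```python
import numpy as np

def dilate(mask, it=1):
    """Naive binary morphological dilation with a 3x3 structuring element."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(it):
        p = np.pad(m, 1)                 # pad border with False
        out = np.zeros_like(m)
        for dy in (-1, 0, 1):            # OR over the 3x3 neighborhood
            for dx in (-1, 0, 1):
                out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        m = out
    return m

def dcm_filter(color_mask, motion_mask):
    """Our reading of the DCM filter: combine the skin-color mask with a
    pre-dilated frame-difference (motion) mask, then dilate the result."""
    return dilate(color_mask & dilate(motion_mask))
```

Requiring both color and motion evidence suppresses skin-colored background, while the dilations close small gaps along the outline that the snake will later refine.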

A Study on the Individual Authentication Using Facial Information For Online Lecture (가상강의에 적용을 위한 얼굴영상정보를 이용한 개인 인증 방법에 관한 연구)

  • 김동현;권중장
    • Proceedings of the IEEK Conference / 2000.11c / pp.117-120 / 2000
  • In this paper, we propose an authentication system for online lectures that uses facial information and a face recognition algorithm based on facial feature relations. First, the facial area is detected against a complex background using color information. Second, features are extracted from the edge profile. Third, these features are compared with the values of the original facial images stored in a database. Experiments show that the proposed system is a useful authentication method for online lectures.

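The comparison against stored facial images can be illustrated with a simple nearest-neighbor matcher over feature vectors; the paper does not specify its distance measure, so the Euclidean metric, the threshold, and the names below are all assumptions:

```python
def authenticate(query_vec, database, max_dist=10.0):
    """Compare a query feature vector against enrolled vectors and accept
    the closest identity if it lies within a distance threshold."""
    best_id, best_d = None, float('inf')
    for user_id, vec in database.items():
        d = sum((q - v) ** 2 for q, v in zip(query_vec, vec)) ** 0.5
        if d < best_d:
            best_id, best_d = user_id, d
    # Reject the query outright if even the best match is too far away.
    return best_id if best_d <= max_dist else None
```

The rejection threshold is what turns a recognizer into an authenticator: an unenrolled face should match nobody.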

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames per second without any low-level library support, and tracks the pupils, nostrils, and lip corners. The signal from the USB camera is in YUV 4:2:0 format. We convert the signal to the RGB color model to display the image, and interpolate the subsampled V channel for use in extracting the facial region. Within the detected facial region, we analyze 2D blob features in the Y (luminance) channel under geometric restrictions to locate each facial feature. The method is simple and intuitive enough to run in real time.

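The YUV-to-RGB conversion mentioned above follows standard BT.601 arithmetic; in a 4:2:0 signal the U and V planes are stored at quarter resolution and must be interpolated up to full resolution before per-pixel conversion, which matches the V-channel interpolation the abstract describes. A per-pixel sketch (the camera's exact coefficients may differ):

```python
def yuv_to_rgb(y, u, v):
    """Convert one YUV pixel (ITU-R BT.601 video range, U/V centered at
    128) to RGB, clamping to the displayable 0..255 range."""
    c, d, e = y - 16, u - 128, v - 128
    r = 1.164 * c + 1.596 * e
    g = 1.164 * c - 0.392 * d - 0.813 * e
    b = 1.164 * c + 2.017 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Conversion cost is one reason the tracker analyzes blobs directly in the full-resolution Y plane and touches chrominance only for face-region extraction.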

Facial-feature Detection in Color Images using Chrominance Components and Mean-Gray Morphology Operation (색도정보와 Mean-Gray 모폴로지 연산을 이용한 컬러영상에서의 얼굴특징점 검출)

  • 강영도;양창우;김장형
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.714-720 / 2004
  • When detecting human faces in color images, additional geometric computation is often necessary to validate face-candidate regions of various shapes. In this paper, we propose a method that detects facial features using the chrominance components of color, which are not affected by face occlusion or orientation. The proposed algorithm exploits the property that the Cb and Cr components show consistent differences around the facial features, especially the eye area. We designed a Mean-Gray morphology operator to emphasize the feature areas in the eye-map image generated from these basic chrominance differences. Experimental results show that this method effectively detects facial features across various face-candidate regions.
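
An eye map built from chrominance differences, in the spirit of the method above, can be sketched as follows; the RGB-to-YCbCr coefficients are the standard BT.601 ones, and the Cb − Cr form of the map is an assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def eye_map(rgb):
    """Simple chrominance eye map: convert an RGB image (H x W x 3,
    0..255) to YCbCr (BT.601) and normalise Cb - Cr, which tends to
    peak around the eye regions."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    m = cb - cr
    return (m - m.min()) / (np.ptp(m) + 1e-9)  # normalise to [0, 1]
```

A morphology operator (the paper's Mean-Gray operation) would then be applied to this map to emphasize compact bright blobs before thresholding.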

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Facial expressions and human behavior are now of wide interest, and human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to study them. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images differ in conditions such as size, color, and brightness. We therefore propose an algorithm that modifies the cascade facial feature point detector with a convolutional neural network whose structure is based on Yann LeCun's LeNet-5. As input to the network, color and gray outputs from the cascade detector were used, resized to 32×32; in addition, the gray images were converted to the YUV format. We then classified about 1,200 test images of subjects. The proposed method proved more accurate than the cascade facial feature point detector alone, because the network refines the cascade detector's results.
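
The LeNet-5-style structure with 32×32 inputs can be illustrated by the spatial sizes flowing through its convolution and subsampling stages (32 → 28 → 14 → 10 → 5 with 5×5 kernels and 2×2 pooling). A naive single-channel numpy sketch of those two operations, not the paper's trained network:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 'valid' 2D convolution (cross-correlation)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def pool2(x):
    """2x2 average pooling, as in LeNet-5's subsampling layers."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

# A 32x32 input shrinks 32 -> 28 -> 14 -> 10 -> 5 through two
# conv(5x5) + pool(2x2) stages, as in a LeNet-5-style network.
x = np.random.rand(32, 32)
k = np.random.rand(5, 5)
h = pool2(conv2d_valid(pool2(conv2d_valid(x, k)), k))
```

The resulting 5×5 maps are what the fully connected layers of such a network would classify.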

Facial Region Segmentation using Watershed Algorithm based on Depth Information (깊이정보 기반 Watershed 알고리즘을 이용한 얼굴영역 분할)

  • Kim, Jang-Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.4 / pp.225-230 / 2011
  • In this paper, we propose a segmentation method that detects the facial region using a depth-information-based watershed and a merge algorithm. The method consists of three steps: watershed segmentation, seed-region detection, and merging. The input color image is segmented into small uniform regions by the watershed; the facial region is then detected by merging uniform regions under chromaticity and edge constraints. This resolves the problems of existing methods that rely on chromaticity or edges alone. Computer simulations evaluating the proposed method show that it is superior at segmenting the facial region.
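
The merge step over watershed regions can be sketched as greedy union-find merging of adjacent regions with similar mean chromaticity; the edge constraint the paper also uses is omitted here, and the threshold and data layout are assumptions:

```python
def merge_similar(regions, adjacency, thresh=15.0):
    """Greedy union-find merge of adjacent over-segmented regions whose
    mean chromaticity values differ by less than a threshold.
    regions:   {region_id: mean_chromaticity}
    adjacency: set of frozenset({id_a, id_b}) neighbor pairs."""
    parent = {r: r for r in regions}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path halving
            r = parent[r]
        return r

    for a, b in (tuple(p) for p in adjacency):
        if find(a) != find(b) and abs(regions[a] - regions[b]) < thresh:
            parent[find(b)] = find(a)
    # Map every region to its final merged label.
    return {r: find(r) for r in regions}
```

Starting the merge from a seed region inside the face (the paper's second step) and growing only across similar neighbors is what keeps the face from bleeding into the background.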