• Title/Summary/Keyword: face feature

Search Result 883, Processing Time 0.034 seconds

Face Feature Extraction for Face Recognition (얼굴 인식을 위한 얼굴 특징점 추출)

  • Yang, Ryong;Chae, Duk-Jae;Lee, Sang-Bum
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.12
    • /
    • pp.1765-1774
    • /
    • 2002
  • Face recognition is currently a field in which much research is being actively conducted, but several problems remain to be solved. First, the face must be recognized while taking into account the subject's location, various lighting changes, and changes of the camera. In this paper, we propose a new method to find facial features quickly and accurately from scanned ID card pictures and PC camera images. The image is converted from the RGB color space to YUV, and face skin color is extracted by equalizing the histogram of the Y component to remove the influence of luminance. The method then uses a V' component, a transform of the V component of YUV, to find the facial features. The experimental results show that correct input face images are obtained from both ID card pictures and PC camera images.

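The color-space pipeline described above (RGB to YUV conversion, then histogram equalization of the Y component to suppress luminance variation) can be sketched roughly as follows; the BT.601 conversion coefficients and the exact equalization formula are assumptions, since the abstract does not specify them:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an RGB image (H x W x 3, values 0-255) to YUV (BT.601 coefficients)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb.astype(np.float64) @ m.T

def equalize_luminance(y):
    """Histogram-equalize the Y channel so skin-color extraction is less
    sensitive to lighting."""
    y = np.clip(y, 0, 255).astype(np.uint8)
    hist = np.bincount(y.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[y] * 255).astype(np.uint8)
```

Skin pixels would then be selected by thresholding the chrominance (U, V') channels of the equalized image.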

A 3D Face Reconstruction Based on the Symmetrical Characteristics of Side View 2D Face Images (측면 2차원 얼굴 영상들의 대칭성을 이용한 3차원 얼굴 복원)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.103-110
    • /
    • 2011
  • A widely used 3D face reconstruction method, structure from motion (SfM), shows robust performance when frontal, left, and right face images are used. However, it cannot correctly reconstruct a self-occluded facial part when only side-view face images are used, because only partial facial feature points are available in this case. To solve this problem, the proposed method exploits a constraint, the bilateral symmetry of human faces, to generate mirrored facial feature points, and uses both the input facial feature points and the generated ones to reconstruct a 3D face. For quantitative evaluation, 3D faces obtained from a 3D face scanner were compared with the reconstructed 3D faces. The experimental results show that the proposed 3D face reconstruction method based on both sets of facial feature points outperforms the previous method based on only partial facial feature points.
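The core idea above, synthesizing the occluded half's feature points by reflecting the visible ones across the face's bilateral symmetry plane, reduces to a point reflection; representing the plane by a point and a normal vector is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def mirror_points(points, plane_point, plane_normal):
    """Reflect 3D feature points across the face's symmetry plane.

    points       : (N, 3) array of visible facial feature points
    plane_point  : any point lying on the symmetry plane
    plane_normal : normal vector of the symmetry plane
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n    # reflected points

# Visible points plus their mirrored counterparts give a fuller point set
# for the subsequent 3D reconstruction.
visible = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.5]])
mirrored = mirror_points(visible, np.zeros(3), np.array([1.0, 0.0, 0.0]))
```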

A New Face Morphing Method using Texture Feature-based Control Point Selection Algorithm and Parallel Deep Convolutional Neural Network (텍스처 특징 기반 제어점 선택 알고리즘과 병렬 심층 컨볼루션 신경망을 이용한 새로운 얼굴 모핑 방법)

  • Park, Jin Hyeok;Khan, Rafiul Hasan;Lim, Seon-Ja;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.176-188
    • /
    • 2022
  • In this paper, we propose a compact anthropomorphism method that uses deep convolutional neural networks (DCNN) to detect the similarity between a human face and an animal face, and applies texture feature-based morphing between them. We also propose a basic texture feature-based morphing system for morphing between human faces only. The anthropomorphism process starts with an animal face classifier built on a parallel DCNN, which determines the animal face most similar to a given human face. The significance of our network is that it contains four sets of convolutional functions running in parallel, allowing it to extract more features than a linear DCNN. Once similarity is established, our texture feature-based automatic morphing system recognizes the facial features of the human face and selects the control points automatically, rather than relying on the traditional manually aided morphing process. The simulation results show that the proposed DCNN surpasses its competitors with a 92.0% accuracy rate, that the most similar animal classes are found, and that the texture-based morphing completes the process automatically, ensuring a smooth transition from one image to another.

A Robust Hybrid Method for Face Recognition Under Illumination Variation (조명 변이에 강인한 하이브리드 얼굴 인식 방법)

  • Choi, Sang-Il
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.10
    • /
    • pp.129-136
    • /
    • 2015
  • We propose a hybrid face recognition method to deal with illumination variation. For this, we extract discriminant features using several different illumination-invariant feature extraction methods. To utilize the advantages of each method, we evaluate the discriminant power of each feature using the discriminant distance and then construct a composite feature from only the features that contain a large amount of discriminative information. The experimental results on the Multi-PIE, Yale B, AR, and Yale databases show that the proposed method outperforms each individual illumination-invariant feature extraction method on all the databases.
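The abstract does not define the "discriminant distance" precisely; one common way to score per-feature discriminant power, a Fisher-style ratio of between-class to within-class variance, and to build a composite feature from the top-scoring dimensions, can be sketched as follows (the scoring function and selection ratio are assumptions):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-dimension discriminant score: between-class over within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def composite_feature(feature_sets, y, keep_ratio=0.5):
    """Concatenate features from several extraction methods, keeping only
    the dimensions with the highest discriminant scores."""
    X = np.hstack(feature_sets)
    scores = fisher_scores(X, y)
    k = max(1, int(keep_ratio * X.shape[1]))
    idx = np.argsort(scores)[::-1][:k]  # indices of the most discriminative dims
    return X[:, idx], idx
```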

A design and implementation of Face Detection hardware (얼굴 검출을 위한 SoC 하드웨어 구현 및 검증)

  • Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.4
    • /
    • pp.43-54
    • /
    • 2007
  • This paper presents the design and verification of face detection hardware for real-time applications. The face detection algorithm detects a rough face position based on previously acquired feature parameter data. The hardware is composed of five main modules: Integral Image Calculator, Feature Coordinate Calculator, Feature Difference Calculator, Cascade Calculator, and Window Detection. It also includes on-chip Integral Image memory and Feature Parameter memory. The face detection hardware was verified using a Samsung Electronics S3C2440A CPU, a Xilinx Virtex4 LX100 FPGA, and a CCD camera module. When synthesized for the Virtex4 LX100 FPGA, our design uses 3,251 LUTs and takes about 0.13 to 1.96 seconds for face detection, depending on the sliding-window step size. When synthesized with a MagnaChip 0.25 μm ASIC library, it uses about 410,000 gates (about 345,000 gates of combinational area and about 65,000 gates of non-combinational area) and takes less than 0.5 seconds for real-time face detection. This size and performance show that it is adequate for embedded system applications. It has been fabricated as part of the XF1201 chip and proven to work.
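The Integral Image Calculator at the head of the pipeline implements the standard summed-area-table trick that lets any rectangular feature be evaluated with at most four memory lookups; a minimal software sketch of what the hardware computes per window is:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

In the hardware, the table is built once per frame and stored in the on-chip Integral Image memory, so each feature difference in the cascade costs constant time.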

Face Detection Using Skin Color and Geometrical Constraints of Facial Features (살색과 얼굴 특징들의 기하학적 제한을 이용한 얼굴 위치 찾기)

  • Cho, Kyung-Min;Hong, Ki-Sang
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.12
    • /
    • pp.107-119
    • /
    • 1999
  • Although face detection is an important part of pattern recognition with many diverse application fields, there is no definitive solution to the problem. The reason is that there are many unpredictable deformations due to facial expressions, viewpoint, rotation, scale, gender, age, etc. To overcome these problems, we propose an algorithm based on the feature-based method, which is well known to be robust to these deformations. We detect a face by calculating the similarity between the configuration of real facial features and candidate feature configurations consisting of the eyebrows, eyes, nose, and mouth. In this paper, we use a steerable filter instead of a general derivative edge detector in order to obtain more accurate feature components. We apply a deformable template to verify the detected face, which overcomes the weak point of the feature-based method. To address the low detection rate of face detection methods that use whole input images, we design an adaptive skin-color filter applicable to diverse skin colors, minimizing the target area and the processing time.


Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Due to the mobility of the camera platform, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints and detect and recognize faces in nearly real time. In the detection step, a coarse-to-fine strategy is used. First, the region boundary including the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. For this, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined, which includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented. From these procedures, facial information of the detected face is obtained, and face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure of the lattice-line matches is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.


Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2373-2378
    • /
    • 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. The features for emotion detection are extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.


Face Detection Using Pixel Direction Code and Look-Up Table Classifier (픽셀 방향코드와 룩업테이블 분류기를 이용한 얼굴 검출)

  • Lim, Kil-Taek;Kang, Hyunwoo;Han, Byung-Gil;Lee, Jong Taek
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.9 no.5
    • /
    • pp.261-268
    • /
    • 2014
  • Face detection is essential to the full automation of face image processing application systems such as face recognition, facial expression recognition, age estimation, and gender identification. Local image features, including Haar-like, LBP, and MCT features, combined with the AdaBoost algorithm for classifier combination, have been found very effective for real-time face detection. In this paper, we present a face detection method using a local pixel direction code (PDC) feature and look-up table classifiers. The proposed PDC feature is much more effective for detecting faces than existing local binary structural features such as MCT and LBP. We found that our method's classification rate, as well as its detection rate under an equal false positive rate, is higher than that of the conventional methods.
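The abstract does not give the exact definition of the pixel direction code; one plausible reading, quantizing each pixel's gradient orientation into a small set of direction codes that then index a look-up table classifier, can be sketched as follows (the central-difference gradient and 8-way quantization are assumptions, not the paper's definition):

```python
import numpy as np

def direction_codes(img, n_codes=8):
    """Assign each interior pixel a code 0..n_codes-1 from its gradient angle."""
    img = img.astype(np.float64)
    gy = img[2:, 1:-1] - img[:-2, 1:-1]   # vertical central difference
    gx = img[1:-1, 2:] - img[1:-1, :-2]   # horizontal central difference
    angle = np.arctan2(gy, gx)            # orientation in (-pi, pi]
    # Quantize the full circle into n_codes equal sectors.
    return ((angle + np.pi) / (2 * np.pi) * n_codes).astype(int) % n_codes
```

Codes of this kind, like LBP or MCT values, are small integers, which is what makes a per-position look-up table classifier feasible.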

Analysis of CIELuv Color feature for the Segmentation of the Lip Region (입술영역 분할을 위한 CIELuv 칼라 특징 분석)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.27-34
    • /
    • 2019
  • In this paper, a new type of lip feature is proposed, based on a distance metric in the CIELUV color system. The performance of the proposed feature was tested on a face image database, the Helen dataset from the University of Illinois. The test process consists of three steps: the first step is feature extraction, the second is principal component analysis for the optimal projection of the feature vector, and the final step is Otsu's threshold for the two-class problem. The performance of the proposed feature was better than that of conventional features. The performance metrics used for evaluation are overlap and segmentation error; the best performance of the proposed feature was 65% overlap and 59% segmentation error, while conventional methods usually show 80-95% overlap and 5-15% segmentation error. However, in the conventional cases the face databases are well calibrated and adjusted, with the same background and illumination for every scene, whereas the Helen dataset used in this paper, gathered from the internet, has no such calibration or adjustment.
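The final step, Otsu's threshold on the one-dimensional projected feature, is a standard two-class method; a minimal implementation over a 256-bin histogram is (scaling the projected feature to 0..255 is an assumption for illustration):

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method: pick the threshold that maximizes between-class variance.
    `values` is a 1-D array of integers in 0..255 (e.g. a projected color feature)."""
    hist = np.bincount(values.astype(np.uint8), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0          # class-0 mean
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1     # class-1 mean
        var = w0 * w1 * (m0 - m1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels whose projected feature value falls below the returned threshold would be labeled as one class (e.g. lip) and the rest as the other.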