• Title/Summary/Keyword: Facial Feature Area


ASM Algorithm Applied to Image Object spFACS Study on Face Recognition (영상객체 spFACS ASM 알고리즘을 적용한 얼굴인식에 관한 연구)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.4
    • /
    • pp.1-12
    • /
    • 2016
  • Digital imaging technology has grown beyond the multimedia industry into a state-of-the-art IT convergence and composite industry, and in the field of smart object recognition in particular, various face recognition techniques have been actively studied in conjunction with smartphone applications. Face recognition has recently evolved, through object recognition, into intelligent video detection and recognition technology, and image-object detection and recognition is now applied to IP cameras, where face recognition based on image-object recognition is an active research topic. In this paper, we first review trends in the required technical elements of human-factor technology and then examine spFACS (Smile Progress Facial Action Coding System), an image-object recognition approach for detecting smiles. The study proceeds in two steps: 1) the ASM algorithm is applied to the image object, suggesting a way to evaluate it effectively for psychological research; 2) the face recognition result is used to detect the tooth area, demonstrating the effect of extracting feature points according to the recognized facial expression of a person.
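The abstract describes a pipeline of facial feature-point extraction followed by detection of the tooth area as a smile cue. Below is a minimal Python sketch of that idea, using dlib's 68-point landmark predictor as a stand-in for the paper's ASM fit; the smile-progress weighting and the model file path are illustrative assumptions, not the authors' spFACS scoring.

```python
# Minimal sketch: landmark-based smile cue, standing in for the paper's ASM + spFACS pipeline.
# Assumes dlib's 68-point model file is available locally (an assumption, not part of the paper).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # hypothetical local path

def smile_progress(image_bgr):
    """Return a rough smile cue in [0, 1] from mouth landmarks and tooth-area brightness."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])

    # Mouth corner spread relative to face width (landmarks 48 and 54 are the mouth corners).
    mouth_width = np.linalg.norm(pts[54] - pts[48])
    face_width = np.linalg.norm(pts[16] - pts[0])
    spread = mouth_width / max(face_width, 1e-6)

    # Brightness of the inner-mouth region as a crude "visible teeth" cue (inner lips: 60..67).
    inner = pts[60:68].astype(np.int32)
    x, y, w, h = cv2.boundingRect(inner)
    tooth_patch = gray[y:y + h, x:x + w]
    tooth_brightness = float(tooth_patch.mean()) / 255.0 if tooth_patch.size else 0.0

    # Illustrative weighting only; the paper's spFACS scoring is not reproduced here.
    return float(np.clip(0.6 * spread + 0.4 * tooth_brightness, 0.0, 1.0))
```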

Warping of 2D Facial Images Using Image Interpolation by Triangle Subdivision (삼각형 반복분할에 의한 영상 보간법을 활용한 2D 얼굴 영상의 변형)

  • Kim, Jin-Mo;Kim, Jong-Yoon;Cho, Hyung-Je
    • Journal of Korea Game Society
    • /
    • v.14 no.2
    • /
    • pp.55-66
    • /
    • 2014
  • Image warping transforms input images to meet given conditions and has recently been used to change the face shape of characters in film and animation. Mesh warping, one such method, changes shapes based on facial features by forming rectangular mesh groups around the eyes, nose, and mouth and matching them 1:1; its drawback is that the resulting images become distorted along mesh boundaries when the mesh control points contain errors or when the mesh is split into many small cells. This study proposes a triangle-based image interpolation technique that minimizes such errors while producing natural, accurate face-warping results with little computation and in a short time. First, feature points that represent the face are found and connected to form basic triangle meshes. Experiments show that the proposed method reduces warping errors while also reducing the amount of computation and processing time.
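As a rough illustration of the warping step described above, the following sketch performs piecewise affine warping of triangles between two landmark sets. It uses a Delaunay triangulation from SciPy as a substitute for the paper's iterative triangle subdivision, and assumes all points lie inside the image.

```python
# Minimal sketch of piecewise triangle warping between two landmark sets.
# The paper's triangle subdivision scheme is not reproduced; a Delaunay
# triangulation is used here purely as an illustrative substitute.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_triangles(src_img, src_pts, dst_pts):
    """Warp src_img so that src_pts move to dst_pts, triangle by triangle."""
    src_pts = np.asarray(src_pts, dtype=np.float32)
    dst_pts = np.asarray(dst_pts, dtype=np.float32)
    out = np.zeros_like(src_img)
    for tri in Delaunay(src_pts).simplices:          # triangle index triplets
        s_tri, d_tri = src_pts[tri], dst_pts[tri]
        # Bounding boxes of the source and destination triangles.
        sx, sy, sw, sh = cv2.boundingRect(s_tri)
        dx, dy, dw, dh = cv2.boundingRect(d_tri)
        # Affine transform mapping the source triangle onto the destination triangle.
        M = cv2.getAffineTransform(np.float32(s_tri - [sx, sy]),
                                   np.float32(d_tri - [dx, dy]))
        patch = cv2.warpAffine(src_img[sy:sy + sh, sx:sx + sw], M, (dw, dh),
                               flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)
        # Copy only the pixels inside the destination triangle.
        mask = np.zeros((dh, dw), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(d_tri - [dx, dy]), 1)
        roi = out[dy:dy + dh, dx:dx + dw]
        roi[mask == 1] = patch[mask == 1]
    return out
```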

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.1-15
    • /
    • 2024
  • The construction of smart communities is a new method and important measure to ensure the security of residential areas. To solve the problem of low face recognition accuracy caused by facial features being distorted by monitoring-camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network: firstly, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Secondly, after the facial keypoints are obtained, they are constructed as a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the individuals' identities are the same. Through various experimental tests, the network designed in this paper achieves an AUC index of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, the proposed network achieves an accuracy of 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. Experimental results demonstrate that the network designed in this paper exhibits high detection and recognition accuracy for faces in surveillance videos.
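A minimal sketch of the core idea, facial keypoints treated as graph nodes, attention across nodes, and a fully connected layer deciding whether two faces belong to the same person, is shown below in PyTorch. Dense multi-head self-attention stands in for the paper's directed graph attention, and all layer sizes are illustrative assumptions rather than the published architecture.

```python
# Minimal sketch of "facial keypoints as graph nodes + attention + FC discriminator".
# Not the paper's architecture; sizes and the dense attention are illustrative assumptions.
import torch
import torch.nn as nn

class KeypointGraphAttention(nn.Module):
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def encode(self, keypoints):
        # keypoints: (batch, num_points, 2) normalized landmark coordinates
        h = self.proj(keypoints)
        h, _ = self.attn(h, h, h)          # attention over all node pairs (dense graph)
        return h.mean(dim=1)               # aggregate node features into one embedding

    def forward(self, kp_a, kp_b):
        # Compare two faces: concatenate embeddings and output a same-identity logit.
        emb = torch.cat([self.encode(kp_a), self.encode(kp_b)], dim=-1)
        return self.classifier(emb)

# Usage: logits = KeypointGraphAttention()(torch.rand(4, 68, 2), torch.rand(4, 68, 2))
```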

Local Context based Feature Extraction for Efficient Face Detection (효율적인 얼굴 검출을 위한 지역적 켄텍스트 기반의 특징 추출)

  • Rhee, Phill-Kyu;Xu, Yong Zhe;Shin, Hak-Chul;Shen, Yan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.1
    • /
    • pp.185-191
    • /
    • 2011
  • Surveillance systems have recently attracted considerable attention, and technologies that detect objects in an image and then determine and recognize whether the object is a person are widely used. This paper therefore proposes a local-context-based facial feature detection algorithm for such objects: feature points are detected with a Gabor bunch and simultaneously refined with a Bayesian detection method. The overall system searches for the object area in the image and applies context-based face detection and feature extraction to improve performance.
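As an illustration of the Gabor-based feature extraction mentioned above, the sketch below samples a small bank of Gabor filter responses at a candidate feature point. The filter parameters and sampling scheme are assumptions for illustration, not the paper's Gabor bunch or its Bayesian refinement step.

```python
# Minimal sketch of Gabor-filter responses at a candidate feature point
# (a "Gabor bunch" in spirit); parameters are illustrative assumptions.
import cv2
import numpy as np

def gabor_bunch(gray, point, sizes=(21,), thetas=np.linspace(0, np.pi, 8, endpoint=False)):
    """Return a vector of Gabor responses sampled at one (x, y) feature point."""
    x, y = point
    responses = []
    for ksize in sizes:
        for theta in thetas:
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0)
            filtered = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            responses.append(filtered[y, x])   # response at the feature point
    return np.array(responses)
```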

Face Recognition Based on Polar Coordinate Transform (극좌표계 변환에 기반한 얼굴 인식 방법)

  • Oh, Jae-Hyun;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.1
    • /
    • pp.44-52
    • /
    • 2010
  • In this paper, we propose a novel method for face recognition which uses polar coordinates instead of the conventional Cartesian coordinates. We select a point in the central area of a face as a pole and build a polar image of the face by evenly sampling pixels in each of the 360 degrees of direction around the pole. By applying conventional feature extraction methods to the polar image, the recognition rates are improved. The polar coordinate system delineates the near-pole area more vividly than the area far from the pole. In a face, important regions such as the eyes, nose, and mouth are concentrated in the central part, so the polar representation of a face image depicts these important facial regions more vividly than the conventional Cartesian representation. The proposed polar coordinate transform was applied to the Yale and FRGC databases, and LDA and NLDA were used to extract features afterwards. The experimental results show that the proposed method performs better than using the conventional Cartesian images.
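A minimal sketch of the polar-image construction follows, using OpenCV's warpPolar to resample a face crop around a pole near the face center; the pole location and output size are illustrative assumptions, and the result would then be fed to a standard feature extractor such as LDA.

```python
# Minimal sketch of the polar-image idea: resample a face image around a pole near the
# face center. Pole location and output size are illustrative assumptions.
import cv2
import numpy as np

def to_polar(face_img, pole=None, out_size=(128, 128)):
    """Resample face_img into polar coordinates around `pole` (defaults to image center)."""
    h, w = face_img.shape[:2]
    if pole is None:
        pole = (w / 2.0, h / 2.0)          # e.g., a point near the nose for an aligned face crop
    max_radius = min(w, h) / 2.0
    # Output rows correspond to angles around the pole, columns to radial distance from it.
    return cv2.warpPolar(face_img, out_size, pole, max_radius, cv2.WARP_POLAR_LINEAR)

# Usage: polar = to_polar(cv2.imread("face.png", cv2.IMREAD_GRAYSCALE))
```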

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.4
    • /
    • pp.897-904
    • /
    • 2005
  • Face recognition has been an active research area because face image data are easy to acquire and applicable to a wide range of real-world problems. Due to the high dimensionality of the face image space, however, face images are not easy to process. In this paper, we propose a method that reduces the dimension of facial data and extracts features from holistic face images. The proposed algorithm consists of two parts. First, principal component analysis (PCA) is used to transform three-dimensional color face images into one-dimensional gray face images while enhancing image contrast to raise the recognition rate. Second, an integrated linear discriminant analysis (PCA+LDA) combines PCA for dimensionality reduction with LDA for discrimination of the facial vectors; this allows a concise algorithmic expression and prevents the information loss that can occur when the two steps are performed separately. To validate the proposed method, the algorithm was implemented and tested on well-controlled face databases.
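A generic PCA-then-LDA pipeline, sketched below with scikit-learn, illustrates the two roles described above (PCA for dimensionality reduction, LDA for discrimination); it is not the paper's integrated PCA+LDA formulation, and the component count is an assumption.

```python
# Minimal sketch of a PCA -> LDA face recognition pipeline in scikit-learn.
# The generic recipe, not the paper's integrated formulation; sizes are assumptions.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def build_pca_lda(n_pca=100):
    # PCA reduces the raw pixel vectors; LDA then finds class-discriminative directions.
    return make_pipeline(PCA(n_components=n_pca, whiten=True),
                         LinearDiscriminantAnalysis())

# Usage with flattened grayscale face images X (n_samples, n_pixels) and identity labels y:
# model = build_pca_lda().fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```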

Development of Virtual Makeup Tool based on Mobile Augmented Reality

  • Song, Mi-Young;Kim, Young-Sun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.127-133
    • /
    • 2021
  • In this study, an augmented-reality-based makeup tool was built that analyzes the user's face shape against face-type reference model data and provides virtual makeup suited to that face type. To analyze the face shape, the face is first recognized in the camera image and features of the face contour area are extracted as analysis attributes. The extracted contour feature points are then normalized so they can be compared with the contour characteristics of each face-type reference model. The face shape is predicted and analyzed using the distance differences between the normalized contour feature points and the feature points of each face-type reference model. In the augmented-reality virtual makeup, the face is recognized in real time in the camera image, features of each facial area are extracted, and makeup matching the analyzed face shape is rendered so the user can check the result. We expect the proposed system to let cosmetics consumers conveniently check makeup designs that suit them, to influence their decisions to purchase cosmetics, and to help them create an attractive self-image by applying facial makeup to their virtual selves.
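The face-type matching step described above can be illustrated with a short sketch: normalize the contour feature points and choose the reference model with the smallest summed point-to-point distance. The reference models and landmark layout are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the face-type matching step: normalize contour landmarks and pick
# the reference model with the smallest total point-to-point distance.
import numpy as np

def normalize_contour(points):
    """Center the contour points and scale them to unit size."""
    pts = np.asarray(points, dtype=np.float64)
    pts = pts - pts.mean(axis=0)
    return pts / max(np.linalg.norm(pts), 1e-9)

def classify_face_shape(contour_points, reference_models):
    """reference_models: dict like {"oval": pts, "round": pts, ...} with matching point counts."""
    query = normalize_contour(contour_points)
    distances = {name: np.linalg.norm(query - normalize_contour(ref), axis=1).sum()
                 for name, ref in reference_models.items()}
    return min(distances, key=distances.get)
```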

Face region detection algorithm of natural-image (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.7 no.1
    • /
    • pp.55-60
    • /
    • 2014
  • In this paper, we propose a method for extracting face regions in natural images using skin-color hue and saturation together with facial feature extraction. The proposed algorithm consists of a lighting-correction step and a face detection process. The lighting-correction step applies a correction function to compensate for lighting changes. The face detection process extracts skin-color areas by computing Euclidean distances between the input image and characteristic vectors of color and chroma obtained from 20 skin-color sample images. For the extracted candidate areas, the eyes are detected using the C component of the CMY color model and the mouth using the Q component of the YIQ color model, and the face area is then determined based on knowledge of the human face. In an experiment with 10 natural face images as input, the method showed a face detection rate of 100%.
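A minimal sketch of the skin-color extraction step is shown below: pixels are kept if their hue/saturation vector lies within a Euclidean-distance threshold of the mean skin color estimated from sample patches. The color-space handling and threshold are illustrative assumptions rather than the paper's exact characteristic vectors.

```python
# Minimal sketch of skin-color segmentation by Euclidean distance to a mean skin-color
# vector estimated from sample patches. Threshold and color space are assumptions.
import cv2
import numpy as np

def skin_mask(image_bgr, skin_samples_bgr, threshold=25.0):
    """Return a binary mask of pixels close to the mean skin color of the sample patches."""
    # Work in HSV and use hue/saturation only, ignoring value to reduce lighting sensitivity.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    samples = np.concatenate([cv2.cvtColor(s, cv2.COLOR_BGR2HSV).reshape(-1, 3)
                              for s in skin_samples_bgr]).astype(np.float32)
    mean_hs = samples[:, :2].mean(axis=0)                   # mean (hue, saturation)
    dist = np.linalg.norm(hsv[:, :, :2] - mean_hs, axis=2)  # per-pixel Euclidean distance
    return (dist < threshold).astype(np.uint8) * 255
```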

Association of Nose Size and Shapes with Self-rated Health and Mibyeong (코의 크기 및 형태와 자가건강, 미병과의 상관성)

  • Ahn, Ilkoo;Bae, Kwang-Ho;Jin, Hee-Jeong;Lee, Siwoo
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.35 no.6
    • /
    • pp.267-273
    • /
    • 2021
  • Mibyeong is a concept that represents sub-health in traditional East Asian medicine. Assuming that nose size and shape are related to respiratory function, we hypothesized in this study that nose size and shape features are related to the self-rated health (SRH) level and self-rated Mibyeong severity, and we assessed this relationship using a fully automated image analysis system. Nose size features were evaluated from the frontal and profile face images of 810 participants and consisted of five length features, one area feature, and one volume feature. The level of SRH and the Mibyeong severity were determined using a questionnaire. The normalized nasal height was negatively associated with the self-rated health score (SRHS) (partial ρ = -0.125, p = 3.53E-04) and the Mibyeong score (MBS) (partial ρ = -0.172, p = 9.38E-07), even after adjustment for sex, age, and body mass index. The normalized nasal volume (ρ = -0.105, p = 0.003), normalized nasal tip protrusion length (ρ = -0.087, p = 0.014), and normalized nares width (ρ = -0.086, p = 0.015) showed significant correlations with the SRHS. The normalized nasal area (ρ = -0.118, p = 0.001) and normalized nasal volume (ρ = -0.107, p = 0.002) showed significant correlations with the MBS. The wider, longer, and larger the nose, the lower the SRHS and MBS, indicating that health status can be estimated from the size and shape features of the nose.
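The reported statistics are partial Spearman correlations adjusted for sex, age, and body mass index. A minimal sketch of such an analysis is shown below using the pingouin package; the column names are assumptions about the data layout, not the study's actual variables.

```python
# Minimal sketch: Spearman partial correlation between a normalized nose feature and a
# health score, adjusted for sex, age, and BMI. Column names are illustrative assumptions.
import pandas as pd
import pingouin as pg

def nose_health_partial_corr(df: pd.DataFrame):
    """df columns assumed: 'nasal_height_norm', 'SRHS', 'sex', 'age', 'bmi'."""
    return pg.partial_corr(data=df, x="nasal_height_norm", y="SRHS",
                           covar=["sex", "age", "bmi"], method="spearman")
```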

3D Face Recognition using Cumulative Histogram of Surface Curvature (표면곡률의 누적히스토그램을 이용한 3차원 얼굴인식)

  • 이영학;배기억;이태흥
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.605-616
    • /
    • 2004
  • In this paper, a new practical implementation of a face verification system is proposed that uses cumulative histograms of surface curvature over local regions and contour-line areas. The approach first locates the nose tip, which protrudes from the face. For feature recognition in 3D face images, the face area is extracted from the original image and then normalized to a frontal orientation. Feature vectors are extracted as cumulative histograms computed from the surface curvature over the contour-line areas 20, 30, and 40 and over the nose, mouth, and eye regions, which carry depth and surface-characteristic information. The L1 measure was used to compare two feature vectors because it is simple and robust. In the experiments, the maximum-curvature feature achieved the highest recognition rate among the proposed methods, 96%.
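The feature and matching scheme described above can be sketched briefly: per-region cumulative histograms of curvature values are concatenated into a feature vector, and two vectors are compared with the L1 distance. The bin count, curvature range, and region selection below are illustrative assumptions.

```python
# Minimal sketch: cumulative histograms of surface curvature per face region, compared
# with the L1 distance. Bin count and curvature range are illustrative assumptions.
import numpy as np

def cumulative_curvature_hist(curvatures, bins=32, value_range=(-1.0, 1.0)):
    """Cumulative (normalized) histogram of curvature values from one face region."""
    hist, _ = np.histogram(curvatures, bins=bins, range=value_range)
    cdf = np.cumsum(hist).astype(np.float64)
    return cdf / max(cdf[-1], 1e-9)

def l1_distance(feat_a, feat_b):
    """L1 measure between two concatenated feature vectors."""
    return float(np.abs(np.asarray(feat_a) - np.asarray(feat_b)).sum())

# Usage: feature vector = concatenation of per-region cumulative histograms, e.g.
# f = np.concatenate([cumulative_curvature_hist(region) for region in face_regions])
```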