• Title/Summary/Keyword: Face Alignment

Tiny and Blurred Face Alignment for Long Distance Face Recognition

  • Ban, Kyu-Dae;Lee, Jae-Yeon;Kim, Do-Hyung;Kim, Jae-Hong;Chung, Yun-Koo
    • ETRI Journal
    • /
    • v.33 no.2
    • /
    • pp.251-258
    • /
    • 2011
  • Applying face alignment after face detection strongly influences face recognition. Many researchers have recently investigated face alignment using databases collected from images taken at close distances and with low magnification. However, in the case of home-service robots, captured images are generally of low resolution and low quality. Therefore, previous face alignment research, such as eye detection, is not appropriate for robot environments. The main purpose of this paper is to provide a new and effective approach to the alignment of small and blurred faces. We propose a face alignment method using the confidence value of Real-AdaBoost with a modified census transform (MCT) feature. We also evaluate the face recognition system to compare the proposed face alignment module with those of other systems. Experimental results show that the proposed method achieves a high recognition rate, higher than that of face alignment methods using a manually marked eye position.
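
The abstract's key ingredient is the modified census transform (MCT) feature fed to Real-AdaBoost. As an illustration only, here is a minimal sketch of an MCT computation, assuming the common 3x3 formulation in which each pixel of the window is compared against the window mean; it is not the authors' implementation.

```python
import numpy as np

def modified_census_transform(gray):
    """9-bit MCT code per pixel: each pixel of a 3x3 window is compared
    against the window mean. Border pixels are left as zero."""
    img = gray.astype(np.float32)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint16)
    weights = 1 << np.arange(9)                        # bit weights for the 9 comparisons
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            bits = (win.ravel() > win.mean()).astype(np.uint16)
            codes[y, x] = np.dot(bits, weights)        # pack comparisons into one code
    return codes
```

In the paper's setting, features of this kind feed Real-AdaBoost, whose confidence value then guides the alignment of small, blurred faces.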

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.6
    • /
    • pp.403-411
    • /
    • 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods refer to 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. This paper presents a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform on each axis is defined with respect to the reference position. In the aligning process, the rotational transform converts an input 3D face with large pose variation to the reference frontal view. The facial region is cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is carried out on the face to sample at equal intervals and fill holes. Color interpolation is also carried out at the same interval. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and evaluate the performance of the suggested process.
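
To make the aligning step concrete, below is a rough sketch, under assumptions of my own (landmark names, a sphere radius, numpy array layouts), of how eye and mouth positions can define a face frame, rotate the scan to a frontal reference, and crop a sphere around the nose tip; the paper's exact per-axis rotations are not reproduced here.

```python
import numpy as np

def align_and_crop_3d_face(points, left_eye, right_eye, mouth, nose_tip, radius=80.0):
    """points: (N, 3) vertex cloud; landmarks are 3-vectors in the same units.
    Returns the cropped, nose-centered, frontal-view point cloud."""
    x_axis = right_eye - left_eye                      # across the eyes
    x_axis /= np.linalg.norm(x_axis)
    y_axis = 0.5 * (left_eye + right_eye) - mouth      # mouth toward eye midpoint
    y_axis -= x_axis * np.dot(y_axis, x_axis)          # make orthogonal to x_axis
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)                  # points out of the face
    R = np.stack([x_axis, y_axis, z_axis])             # rows form the face frame
    aligned = (points - nose_tip) @ R.T                # express points in that frame
    return aligned[np.linalg.norm(aligned, axis=1) <= radius]  # spherical crop at nose tip
```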

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.193-201
    • /
    • 2014
  • This paper proposes a facial expression recognition system using face detection, face alignment, facial unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is found by a face detector. From this result, a face alignment algorithm extracts feature points. The facial units are derived from a subset of action units generated by combining the obtained feature points. The facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, and hence can be applied to real-time scenarios. Experimental results in real scenarios showed that the proposed system performs well, with recognition rates over 90%.
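
As a toy illustration of the classification stage only, the snippet below trains an AdaBoost classifier on a hypothetical matrix of facial-unit features; the placeholder data, feature dimensions, class count, and hyperparameters are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 30))            # 200 faces x 30 facial-unit features (placeholder)
y_train = rng.integers(0, 7, 200)          # 7 expression classes (placeholder labels)

# Default weak learner is a depth-1 decision tree (a decision stump).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_test = rng.random((5, 30))
print(clf.predict(X_test))                 # predicted expression labels
```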

Robust Face Alignment using Progressive AAM (점진적 AAM을 이용한 강인한 얼굴 윤곽 검출)

  • Kim, Dae-Hwan;Kim, Jae-Min;Cho, Seong-Won;Jang, Yong-Suk;Kim, Boo-Gyoun;Chung, Sun-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.11-20
    • /
    • 2007
  • AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In this paper, we propose a face alignment method using a progressive AAM. The proposed method consists of two stages: a modeling and relation-derivation stage and a fitting stage. The modeling and relation-derivation stage first builds two AAM models, an inner-face AAM and a whole-face AAM, and then derives the relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. The fitting stage proceeds progressively in two phases. In the first phase, the proposed method finds the feature parameters for the inner facial feature points of a new face. In the second phase, it localizes the whole set of facial feature points of the new face, using initial values estimated from the inner feature parameters obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment method is more robust with respect to pose and face background than conventional basic AAM-based face alignment.
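
The abstract does not specify how the relation matrix is derived, so the sketch below simply assumes a least-squares fit between the two parameter vectors over a training set, which is one plausible reading; the function names and the affine bias term are illustrative.

```python
import numpy as np

def derive_relation_matrix(inner_params, whole_params):
    """inner_params: (n_faces, d_inner) inner-face AAM parameter vectors.
    whole_params:  (n_faces, d_whole) whole-face AAM parameter vectors.
    Returns an affine map R (least-squares estimate, with a bias row)."""
    P = np.hstack([inner_params, np.ones((inner_params.shape[0], 1))])
    R, *_ = np.linalg.lstsq(P, whole_params, rcond=None)
    return R                                           # shape: (d_inner + 1, d_whole)

def initial_whole_params(R, inner_param_vec):
    """Initial value for the second fitting phase, predicted from the
    inner-face parameters found in the first phase."""
    return np.append(inner_param_vec, 1.0) @ R
```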

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose and expression invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass of the algorithm coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For facial recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified-classifier-based face identification (FI) algorithm is employed, which combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis carried out for the proposed algorithm reveals its superiority in terms of computational efficiency as well.
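
For the verification stage, the abstract only states that an SVM with an optimal separating hyperplane is used, so the following is a generic sketch with invented feature dimensions and a common pair-difference formulation (same/different identity) standing in for whatever representation the paper actually feeds the SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
feat_a = rng.random((400, 128))            # placeholder face descriptors, image A of each pair
feat_b = rng.random((400, 128))            # placeholder face descriptors, image B of each pair
pairs = np.abs(feat_a - feat_b)            # one difference vector per face pair
labels = rng.integers(0, 2, 400)           # 1 = same identity, 0 = different (placeholder)

svm = SVC(kernel="rbf", C=1.0)             # decision rule is the separating hyperplane
svm.fit(pairs, labels)

scores = svm.decision_function(pairs[:5])  # signed distance from the hyperplane
accepted = scores > 0                      # accept pairs on the "same identity" side
```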

3D Active Appearance Model for Face Recognition (얼굴인식을 위한 3D Active Appearance Model)

  • Cho, Kyoung-Sic;Kim, Yong-Guk
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1006-1011
    • /
    • 2007
  • Active Appearance Models (AAMs) are widely used for object modeling; in particular, face models are widely used for face tracking, pose recognition, expression recognition, and face recognition. The earliest AAM was the combined AAM, in which shape and appearance are generated from a single set of coefficients; later, the independent AAM, in which the shape and appearance coefficients are separated, and the combined 2D+3D AAM, which can represent 3D, were developed. Although the combined 2D+3D AAM can represent 3D, all of these models are built from 2D images. In this paper, we propose a 3D AAM based on 3D data acquired with a stereo-camera based 3D face capturing device. Because our 3D AAM builds its model from 3D information, it can represent 3D more accurately than existing AAMs, and by using Inverse Compositional Image Alignment (ICIA) as the alignment algorithm, it can generate model instances quickly. To evaluate the 3D AAM, we performed face recognition on a Korean face database [9] collected with the stereo-camera based 3D face capturing device.
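
Independent of the paper's ICIA fitting details, an AAM model instance is a linear combination of basis shapes and basis appearances. The sketch below shows that generation step for a 3D shape model; the array layouts and names are my own assumptions, not the authors' code.

```python
import numpy as np

def aam_instance(mean_shape, shape_basis, mean_app, app_basis, p, lam):
    """mean_shape: (3 * n_vertices,) flattened mean 3D shape.
    shape_basis: (k, 3 * n_vertices) shape eigenvectors; p: (k,) shape coefficients.
    mean_app / app_basis / lam: the analogous appearance terms.
    Returns the instantiated shape and appearance vectors."""
    shape = mean_shape + p @ shape_basis          # linear shape model
    appearance = mean_app + lam @ app_basis       # linear appearance model
    return shape, appearance
```

In an independent AAM the coefficient vectors p and lam are fitted separately; ICIA is one efficient way to update them, but it is not reproduced here.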

Jewelry Model Cast Elements Evolution with Alignment Angle in DuraForm Rapid Prototyping (쾌속조형 듀라폼 성형체에서의 배치각 변화에 따른 주얼리주조모형의 형상요소변화)

  • Joo, Young-Cheol;Song, Oh-Sung
    • Journal of Korea Foundry Society
    • /
    • v.21 no.5
    • /
    • pp.290-295
    • /
    • 2001
  • We fabricated test samples containing various shape elements and surface roughness checking points for jewelry cast master patterns by employing 3D computer aided design (CAD) and selective laser sintering (SLS) rapid prototyping (RP) with DuraForm powders. We varied the alignment angle from 0° to 10° at layer thicknesses of 0.08 mm and 0.1 mm in the RP operation. Dimensions of the shape elements as well as surface roughness values were characterized with an optical microscope and a contact-scanning profilometer. Surface roughness values of the top and vertical faces increased as the alignment angle increased, while the other roughness values and the shape-element variations did not depend on the alignment angle. The resolution of the shape realization was enhanced as the layer thickness became smaller. The minimum diameter of the hole, common in jewelry design, was 1.2 mm, and the shrinkage became 12% at the 1.6 mm-diameter hole. Our results imply that the proposed design elements should be faced down with a 0° alignment angle and that the shrinkage effect of each shape element should be considered in DuraForm RP jewelry modeling.

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, various services using artificial intelligence (AI) have been emerging in the media field as well. However, most video editing, which involves finding an editing point and joining the clips, is carried out manually, requiring a lot of time and human resources. Therefore, this study proposes a methodology that detects video editing points according to whether the person in the video is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment; through this process, the temporal and spatial changes of the face in the input video data are captured. The Video Swin Transformer-based model proposed in this study then classifies the behavior of the person in the video. Specifically, the feature map generated by the Video Swin Transformer from the video data is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the editing point detection model using facial keypoints proposed in this paper improved from 87.46% to 89.17% compared to the model without facial keypoints.
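
To illustrate the fusion step only, the toy module below concatenates pooled video features (standing in for a Video Swin Transformer backbone, which is not shown) with flattened facial-keypoint trajectories and classifies speaking versus not speaking. All dimensions are assumed, and the paper's convolutional classification layers are replaced here by a small fully connected head for brevity.

```python
import torch
import torch.nn as nn

class UtteranceHead(nn.Module):
    """Toy fusion head: pooled video features + facial keypoints -> speaking / not."""
    def __init__(self, video_dim=768, num_keypoints=68, num_frames=16):
        super().__init__()
        kp_dim = num_keypoints * 2 * num_frames          # (x, y) per keypoint per frame
        self.classifier = nn.Sequential(
            nn.Linear(video_dim + kp_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),                           # speaking vs. not speaking
        )

    def forward(self, video_feat, keypoints):
        # video_feat: (B, video_dim); keypoints: (B, num_frames, num_keypoints, 2)
        fused = torch.cat([video_feat, keypoints.flatten(1)], dim=1)
        return self.classifier(fused)

# Usage with random placeholder tensors:
head = UtteranceHead()
logits = head(torch.randn(4, 768), torch.randn(4, 16, 68, 2))
```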

Facial Landmark Detection by Stacked Hourglass Network with Transposed Convolutional Layer (Transposed Convolutional Layer 기반 Stacked Hourglass Network를 이용한 얼굴 특징점 검출에 관한 연구)

  • Gu, Jungsu;Kang, Ho Chul
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1020-1025
    • /
    • 2021
  • Facial alignment is a very important task in human life, and facial landmark detection is one of the instrumental methods for face alignment. We introduce stacked hourglass networks with transposed convolutional layers for facial landmark detection. Our method replaces nearest-neighbor upsampling with transposed convolutional layers, and it achieves better accuracy in facial landmark detection than stacked hourglass networks with nearest-neighbor upsampling.
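
The change described is small and easy to show: in the upsampling stage of an hourglass block, a fixed nearest-neighbor upsampling layer is swapped for a learned transposed convolution. The sketch below contrasts the two layers in PyTorch; the channel count is an example value, not taken from the paper.

```python
import torch
import torch.nn as nn

channels = 256                                     # example feature-map depth

# Baseline decoder step in a stacked hourglass network: fixed 2x upsampling.
nearest_up = nn.Upsample(scale_factor=2, mode="nearest")

# Substitute discussed in the paper: a learned 2x transposed convolution.
learned_up = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)

x = torch.randn(1, channels, 16, 16)               # dummy feature map
assert nearest_up(x).shape == learned_up(x).shape  # both produce (1, 256, 32, 32)
```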

Reconstruction of Neural Circuits Using Serial Block-Face Scanning Electron Microscopy

  • Kim, Gyu Hyun;Lee, Sang-Hoon;Lee, Kea Joo
    • Applied Microscopy
    • /
    • v.46 no.2
    • /
    • pp.100-104
    • /
    • 2016
  • Electron microscopy is currently the only available technique with a spatial resolution sufficient to identify fine neuronal processes and synaptic structures in densely packed neuropil. For large-scale volume reconstruction of neuronal connectivity, serial block-face scanning electron microscopy allows us to acquire thousands of serial images in an automated fashion and to reconstruct neural circuits faster by reducing the alignment task. Here we introduce the whole reconstruction procedure for the synaptic network in the rat hippocampal CA1 area and discuss technical issues to be resolved to improve image quality and segmentation. Compared to serial-section transmission electron microscopy, serial block-face scanning electron microscopy produced much more reliable three-dimensional data sets and accelerated reconstruction by reducing the need for alignment and distortion adjustment. This approach will generate invaluable information on organizational features of our connectomes as well as on diverse neurological disorders caused by synaptic impairments.