• Title/Summary/Keyword: face rotation (얼굴 회전)

121 search results

Face Detection Using A Selectively Attentional Hough Transform and Neural Network (선택적 주의집중 Hough 변환과 신경망을 이용한 얼굴 검출)

  • Choi, Il;Seo, Jung-Ik;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.93-101
    • /
    • 2004
  • A face boundary can be approximated by an ellipse with five parameters, which allows ellipse detection algorithms to be adapted to face detection. However, constructing the huge five-dimensional parameter space required by a standard Hough transform is impractical. Accordingly, we propose a selectively attentional Hough transform method for detecting faces from symmetric contours in an image. The idea is based on a constant aspect ratio for the face, gradient information, and scan-line-based orientation decomposition, which splits the five-dimensional problem into a two-dimensional one, computing a center with a specific orientation, and a one-dimensional one, estimating the short axis. In addition, a two-point selection constraint using geometric and gradient information is employed to increase speed and cope with cluttered backgrounds. After detecting candidate face regions with the proposed Hough transform, a multi-layer perceptron verifier rejects false positives. The proposed method was found to be relatively fast and promising.
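The two-dimensional center-accumulation idea can be illustrated with a standard ellipse property: two boundary points whose gradient orientations are antiparallel are diametrically opposite, so their midpoint is the ellipse center. The sketch below is a minimal illustration of that voting step on synthetic data, not the authors' implementation; the pairing tolerance and input format are assumptions.

```python
import numpy as np

def vote_ellipse_centers(points, gradients, angle_tol=0.1):
    """Accumulate 2-D center votes from edge-point pairs.

    For an ellipse, two boundary points with antiparallel gradient
    orientations are diametrically opposite, so their midpoint is the
    center.  `points` is an (N, 2) array of edge coordinates and
    `gradients` an (N,) array of orientations in radians.
    """
    votes = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            # Antiparallel gradients: orientations differ by ~pi.
            diff = abs(abs(gradients[i] - gradients[j]) - np.pi)
            if diff < angle_tol:
                votes.append((points[i] + points[j]) / 2.0)
    return np.array(votes)

# Synthetic ellipse centered at (50, 40): votes should cluster there.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
a, b, cx, cy = 30.0, 20.0, 50.0, 40.0
pts = np.stack([cx + a * np.cos(t), cy + b * np.sin(t)], axis=1)
# Gradient orientation of the implicit ellipse equation at each point.
grads = np.arctan2(np.sin(t) / b, np.cos(t) / a)
centers = vote_ellipse_centers(pts, grads)
est = centers.mean(axis=0)
```

A full detector would accumulate these votes in a discrete 2-D array per scan-line orientation and then estimate the short axis as a separate 1-D problem, as the abstract describes.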

Deep learning based face mask recognition for access control (출입 통제에 활용 가능한 딥러닝 기반 마스크 착용 판별)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.8
    • /
    • pp.395-400
    • /
    • 2020
  • Coronavirus disease 2019 (COVID-19) was identified in December 2019 in China and has spread globally, resulting in an ongoing pandemic. Because COVID-19 is spread mainly from person to person, everyone is required to wear a face mask in public. Nevertheless, many people still do not wear face masks despite official advice. This paper proposes a method for predicting whether a subject is wearing a face mask. In the proposed method, the two eye regions are detected, and the mask region (i.e., the face region below the two eyes) is predicted and extracted based on the two eye locations. For more accurate extraction of the mask region, the facial region is first aligned by rotating it so that the line connecting the two eye centers becomes horizontal. The mask region extracted from the aligned face is then fed into a convolutional neural network (CNN), which produces the classification result (with or without a mask). Experiments on 186 test images showed that the proposed method achieves a very high accuracy of 98.4%.
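The alignment step, rotating the face so the inter-eye line is horizontal, reduces to basic trigonometry. The sketch below assumes eye-center coordinates from a separate detector (not shown) and applies the leveling rotation about the eye midpoint; it is an illustration of the step, not the paper's code.

```python
import numpy as np

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the line joining the two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))

def rotate_points(points, center, angle_deg):
    """Rotate 2-D points about `center` by -angle, leveling the eyes."""
    th = np.radians(-angle_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (np.asarray(points) - center) @ R.T + center

left, right = np.array([100.0, 120.0]), np.array([160.0, 150.0])
angle = eye_alignment_angle(left, right)
mid = (left + right) / 2.0
l2, r2 = rotate_points([left, right], mid, angle)
# After alignment the two eye centers share the same y-coordinate.
```

In practice the same angle would be passed to an image-rotation routine so the pixel data (and hence the extracted mask region) is leveled as well.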

Affine Invariant Local Descriptors for Face Recognition (얼굴인식을 위한 어파인 불변 지역 서술자)

  • Gao, Yongbin;Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.9
    • /
    • pp.375-380
    • /
    • 2014
  • Under controlled conditions, such as fixed viewpoints or consistent illumination, the performance of face recognition is now usually high enough to be acceptable. In the real world, however, face recognition remains a challenging task. The SIFT (Scale-Invariant Feature Transform) algorithm is scale and rotation invariant, but it is effective only for small viewpoint changes and often fails when the viewpoint varies widely. In this paper, we use Affine-SIFT (ASIFT), an extension of SIFT designed to overcome this weakness, to detect affine-invariant local descriptors for face recognition under wide viewpoint changes. In our scheme, ASIFT is applied only to the gallery face, while plain SIFT is applied to the probe face. ASIFT generates a series of different viewpoints using affine transformations, thereby tolerating viewpoint differences between the gallery and probe faces. Experimental results showed that our framework achieves higher recognition accuracy than the original SIFT algorithm on the FERET database.
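ASIFT's viewpoint generation amounts to enumerating 2x2 affine maps, a rotation by a longitude angle followed by a directional tilt, and extracting descriptors from each warped gallery image. The sketch below enumerates such maps; the tilt and longitude sampling shown is a simplified assumption, not the exact ASIFT schedule.

```python
import numpy as np

def affine_view_simulations(tilts=(1.0, np.sqrt(2), 2.0), phi_step=72.0):
    """Enumerate 2x2 affine maps for ASIFT-style view simulation.

    Each simulated view applies a rotation R(phi) followed by a tilt
    T = diag(t, 1), approximating a change of camera axis.  Descriptors
    from every simulated gallery view are then matched against the
    probe with plain SIFT.
    """
    views = []
    for t in tilts:
        # No tilt needs only one longitude; tilted views sample several.
        phis = [0.0] if t == 1.0 else np.arange(0.0, 180.0, phi_step)
        for phi in phis:
            r = np.radians(phi)
            R = np.array([[np.cos(r), -np.sin(r)],
                          [np.sin(r),  np.cos(r)]])
            T = np.diag([t, 1.0])
            views.append(T @ R)
    return views

sims = affine_view_simulations()  # 1 frontal + 3 + 3 tilted views
```

Each matrix would be fed to an image-warping routine before descriptor extraction; matching all warped gallery views against a single probe is what buys tolerance to wide viewpoint changes.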

A Study on Local Micro Pattern for Facial Expression Recognition (얼굴 표정 인식을 위한 지역 미세 패턴 기술에 관한 연구)

  • Jung, Woong Kyung;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Convergence Security Journal
    • /
    • v.14 no.5
    • /
    • pp.17-24
    • /
    • 2014
  • This study proposes LDP (Local Directional Pattern) as a new local micro pattern for facial expression recognition, addressing the noise sensitivity of LBP (Local Binary Pattern). The proposed method extracts eight directional components using m×m masks, selects the k largest components, marks each selected component with a bit value of 1 (and the rest with 0), and finally concatenates the eight bits into a pattern code. The results show better robustness to rotation and noise. Based on the proposed method, a new local facial feature can also be developed to represent both PFFs (permanent facial features) and TFFs (transient facial features).
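The pattern-code construction can be sketched directly: convolve a neighborhood with eight directional masks, keep the k strongest responses, and pack one bit per direction. The mask values below follow the common Kirsch convention; the paper's exact masks and parameters may differ, so treat this as an illustrative sketch.

```python
import numpy as np

# Eight Kirsch-style directional masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(patch, k=3):
    """LDP code of a 3x3 patch: set the bits of the k strongest
    directional responses, yielding an 8-bit pattern code."""
    responses = np.array([abs((patch * m).sum()) for m in KIRSCH])
    top_k = np.argsort(responses)[-k:]      # indices of the k largest
    code = 0
    for i in top_k:
        code |= 1 << int(i)                 # one bit per direction
    return code

# A vertical edge: the strongest response is the east-facing mask.
patch = np.array([[10, 10, 90], [10, 10, 90], [10, 10, 90]], float)
code = ldp_code(patch)
```

Because the code depends on the rank order of responses rather than raw pixel comparisons, a small amount of noise that perturbs individual pixels rarely changes which k directions dominate, which is the robustness the abstract claims over LBP.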

Nonlinear feature extraction for regression problems (회귀문제를 위한 비선형 특징 추출 방법)

  • Kim, Seongmin;Kwak, Nojun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.86-88
    • /
    • 2010
  • This paper proposes a nonlinear feature extraction method for regression problems and applies it to classification. The method extends LDAr (Linear Discriminant Analysis for regression), an adaptation of linear discriminant analysis to regression problems, to the nonlinear case by means of a kernel function. The basic idea is to map the input feature space into a new high-dimensional feature space via the kernel function and then to maximize the ratio of the distances between samples that are far apart in target value to the distances between samples that are close. In applications such as face recognition, factors such as face size and rotation are regarded as nonlinear and complex regression problems. Simple regression experiments show that the proposed method outperforms LDAr.
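The criterion behind LDAr, and its kernelized variant, can be sketched as a ratio of pairwise spreads: distances between samples whose targets differ a lot over distances between samples whose targets are similar. The code below is a minimal numpy illustration of that ratio on kernel-mapped features; the similarity threshold `eps`, the RBF map, and the data are all assumptions, and the full method would additionally solve for projections maximizing this ratio.

```python
import numpy as np

def ldar_objective(features, targets, eps=0.5):
    """Spread of target-dissimilar pairs over spread of target-similar
    pairs -- the ratio the LDAr-style criterion maximizes."""
    n = len(targets)
    far, near = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((features[i] - features[j]) ** 2)
            if abs(targets[i] - targets[j]) > eps:
                far += d
            else:
                near += d
    return far / near

def rbf_features(X, centers, gamma=1.0):
    """Explicit RBF kernel map K(x, c): the nonlinear lift that turns
    the linear criterion into a kernel method."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sum(X ** 2, axis=1)          # a nonlinear regression target
Phi = rbf_features(X, X)            # kernel-mapped features
score = ldar_objective(Phi, y)
```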


A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.1-9
    • /
    • 2008
  • As the security requirements of mobile phones increase, there has been extensive research on authentication using a single biometric feature (e.g., an iris, a fingerprint, or a face image). Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images to improve accuracy in mobile environments. This paper presents four contributions over previous research. First, to capture face and iris images quickly and simultaneously, we use the phone's built-in conventional mega-pixel camera, revised to capture NIR (near-infrared) face and iris images. Second, to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data. For the face, an NIR illuminator and an NIR-passing filter on the camera reduce the illumination variance caused by environmental visible lighting, and the saturated facial regions caused by the NIR illuminator are normalized by a low-cost logarithmic algorithm suited to mobile phones. For the iris, a polar-coordinate image transform and iris-code shifting yield robust identification accuracy irrespective of the capture conditions. Fourth, to increase processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments with face and iris images captured by a mega-pixel mobile-phone camera showed that the authentication accuracy of the SVM-based fusion was better than that of the uni-modal approaches (face or iris alone) and of the SUM, MAX, MIN, and weighted-SUM rules.
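Score-level fusion presupposes that both matchers' scores live on a common scale. The sketch below shows one common choice, min-max normalization, followed by the baseline fusion rules (SUM, MAX, MIN, weighted SUM) the paper compares its SVM fusion against; the score ranges and weight are illustrative assumptions, and the SVM classifier itself is not shown.

```python
import numpy as np

def minmax_normalize(scores, lo, hi):
    """Map raw matcher scores into [0, 1] before fusion."""
    return np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

def fuse(face, iris, rule="wsum", w=0.6):
    """Baseline fusion rules over normalized face and iris scores."""
    if rule == "sum":
        return face + iris
    if rule == "max":
        return np.maximum(face, iris)
    if rule == "min":
        return np.minimum(face, iris)
    return w * face + (1.0 - w) * iris   # weighted SUM

# Two subjects' raw scores on hypothetical matcher scales.
face = minmax_normalize(np.array([420.0, 180.0]), lo=100.0, hi=500.0)
iris = minmax_normalize(np.array([0.28, 0.47]), lo=0.0, hi=0.5)
fused = fuse(face, iris, rule="wsum")
```

In the paper's scheme, the pair of normalized scores would instead be fed as a 2-D feature vector to a trained SVM, which learns a decision boundary rather than a fixed combination rule.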

Object Tracking System for Additional Service Providing under Interactive Broadcasting Environment (대화형 방송 환경에서 부가서비스 제공을 위한 객체 추적 시스템)

  • Ahn, Jun-Han;Byun, Hye-Ran
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.1
    • /
    • pp.97-107
    • /
    • 2002
  • In general, under an interactive broadcasting environment, the user finds additional services through a top-down menu. However, the user cannot know what information an additional service provides until the retrieval has finished, and a top-down menu requires multi-level retrieval. This paper proposes a new method for providing additional services through object selection instead of a top-down menu. For this purpose, the MPEG movie must be synchronized with the object information (position, size, shape), and an object tracking technique is required. Synchronization uses DirectShow, provided by Microsoft. Object tracking combines motion-based and model-based tracking: each object is divided into two parts, the face and the substance. Face tracking uses model-based tracking, while the substance is tracked with motion-based tracking built on a block matching algorithm. To improve tracking precision, the motion-based tracker applies a temporal prediction search algorithm, and the model-based tracker applies a face model that merges an ellipse model and a color model.
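The block matching at the core of the motion-based tracker can be sketched as an exhaustive SAD (sum of absolute differences) search over a small window. This is a minimal illustration on synthetic frames; the paper's temporal-prediction speedup, which seeds the search from previous motion, is omitted.

```python
import numpy as np

def block_match(prev, curr, top_left, block=8, search=4):
    """Motion vector (dy, dx) of one block via exhaustive SAD search."""
    y, x = top_left
    ref = prev[y:y + block, x:x + block].astype(np.int64)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] \
                    or xx + block > curr.shape[1]:
                continue  # candidate block falls outside the frame
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int64)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(32, 32))
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))  # scene moves by (2, -1)
mv = block_match(prev, curr, top_left=(10, 10))
```

A temporal-prediction search would start `dy, dx` at the block's previous motion vector and examine only a small neighborhood around it, cutting the number of SAD evaluations substantially.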

Proposing Shape Alignment for an Improved Active Shape Model (ASM의 성능향상을 위한 형태 정렬 방식 제안)

  • Hahn, Hee-Il
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.1
    • /
    • pp.63-70
    • /
    • 2012
  • In this paper, an extension of the original active shape model (ASM) for facial feature extraction is presented. The original ASM suffers from poor shape alignment because it aligns the shape model to a new instance of the object in a given image using a simple similarity transformation, which exploits only scale, rotation, and horizontal and vertical shifts and therefore cannot cope effectively with complex pose variation. To solve this problem, a new shape alignment with six degrees of freedom, corresponding to an affine transformation, is derived. Another extension speeds up the calculation of the Mahalanobis distance for 2-D profiles by trimming the profile covariance matrices. Extensive experiments were conducted on several images of varying poses to check how well the proposed method segments human faces.
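A six-degree-of-freedom alignment maps each landmark as dst = A·src + t with a full 2x2 matrix A, so shear and anisotropic scaling are recovered in addition to the similarity parameters. The sketch below solves the standard least-squares formulation of that fit; it illustrates the kind of alignment the abstract describes, not the paper's derivation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map (6 DOF) taking shape `src` onto `dst`.

    Each landmark satisfies dst_x = a11*x + a12*y + tx and
    dst_y = a21*x + a22*y + ty; stacking all points gives a linear
    system in the six parameters.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src      # rows for the x-equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src      # rows for the y-equations
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)     # interleaved [x1', y1', x2', y2', ...]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_true = np.array([[1.2, 0.3], [-0.1, 0.9]])   # includes shear
t_true = np.array([5.0, -2.0])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

A similarity transform would force A to the form s·R and could not reproduce the shear in this example, which is exactly the limitation the extended alignment removes.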

Estimation of a Driver's Physical Condition Using Real-time Vision System (실시간 비전 시스템을 이용한 운전자 신체적 상태 추정)

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Moon, Chan-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.5
    • /
    • pp.213-224
    • /
    • 2009
  • This paper presents a new algorithm for estimating a driver's physical condition using a real-time vision system and evaluates it on real facial image data. The system relies on face recognition to robustly track the center points and sizes of the driver's two pupils and the two side edge points of the mouth. The face recognition combines color statistics in the YUV color space with a geometric model of a typical face. The system can classify rotation in all viewing directions, detect eye/mouth occlusion, eye blinking, and eye closure, and recover the three-dimensional gaze of the eyes. These cues are used to determine the driver's carelessness and drowsiness. Experimental results demonstrate the validity and applicability of the proposed method for estimating a driver's physical condition.
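Turning per-frame eye-closure detections into a drowsiness decision is typically a matter of temporal aggregation. The sketch below flags drowsiness when the eyes stay closed for a run of consecutive frames; the threshold and the boolean per-frame input are illustrative assumptions, since the paper derives closure from the tracked pupil regions.

```python
def drowsiness_flags(eye_closed, min_frames=15):
    """Flag frames where the eyes have been closed for at least
    `min_frames` consecutive frames (a simple run-length criterion)."""
    flags, run = [], 0
    for closed in eye_closed:
        run = run + 1 if closed else 0   # reset the run on open eyes
        flags.append(run >= min_frames)
    return flags

# 5 open frames, a 20-frame closure, then 3 open frames.
seq = [False] * 5 + [True] * 20 + [False] * 3
flags = drowsiness_flags(seq)
```

Ordinary blinks (a few frames) never reach the threshold, so only sustained closure raises the flag; the carelessness cue would be aggregated from head rotation and gaze in a similar frame-wise fashion.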


Facial Feature Extraction in Reduced Image using Generalized Symmetry Transform (일반화 대칭 변환을 이용한 축소 영상에서의 얼굴특징추출)

  • Paeng, Young-Hye;Jung, Sung-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.569-576
    • /
    • 2000
  • The GST can extract the positions of facial features without prior information about the image. However, this method requires a great deal of processing time because the mask used by the GST must be larger than the objects (eyes, mouth, and nose) in the image, and computing the middle line that determines the facial features is complex. In this paper, we propose two methods to overcome these disadvantages of the conventional approach. First, we use a reduced image that retains enough information instead of the original image, decreasing the processing time. Second, we use the extracted peak positions instead of complex statistical processing to obtain the middle lines. To analyze the performance of the proposed method, we tested 200 images, including frontal, rotated, spectacled, and mustached facial images. The proposed method achieved 85% feature-extraction performance while reducing the processing time by a factor of more than 53 compared with the existing method.
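The speedup comes from the fact that shrinking the image shrinks the GST masks (and the number of pixel pairs they cover) with it. A minimal block-averaging reduction is sketched below; the reduction factor is an illustrative assumption, as the paper chooses the smallest image that still retains enough feature information.

```python
import numpy as np

def reduce_image(img, factor=4):
    """Block-average downsampling by an integer factor.

    Running a symmetry transform on the reduced image lets the masks
    (and hence the runtime) shrink along with the feature sizes.
    """
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple
    return img[:h2, :w2].reshape(h2 // factor, factor,
                                 w2 // factor, factor).mean(axis=(1, 3))

img = np.arange(64.0).reshape(8, 8)
small = reduce_image(img, factor=4)   # 8x8 -> 2x2
```

With a reduction factor of f, the mask area and the number of symmetry pairs both drop roughly as f to the fourth power, which is consistent with the order of speedup the abstract reports.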
