• Title/Abstract/Keywords: Facial Images

Search results: 631 items

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security / Vol. 22, No. 9 / pp.280-286 / 2022
  • A facial recognition system is a biometric technology. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, etc. The system combines two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. A facial recognition system maps facial characteristics from photos or videos and compares that information with a facial database to find a match, which helps identify a face. The developed system records several images, processes the recorded images, checks for a match in the database, and returns the result. It can also recognize multiple faces in live recordings.
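
The detect-then-match pipeline described above can be illustrated with a short sketch. This is a minimal example assuming OpenCV (with the contrib modules) rather than the authors' own implementation; the Haar-cascade detector, the LBPH recognizer, and the 100x100 crop size are illustrative choices.

```python
# Hypothetical sketch: detect faces with a Haar cascade, then match them
# against a small gallery with an LBPH recognizer (requires opencv-contrib-python).
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def crop_faces(gray):
    """Return fixed-size grayscale crops of every detected face."""
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (100, 100)) for x, y, w, h in boxes]

def enroll(gallery_images):
    """gallery_images: list of (grayscale image, person_id) pairs."""
    faces, labels = [], []
    for img, person_id in gallery_images:
        for face in crop_faces(img):
            faces.append(face)
            labels.append(person_id)
    recognizer.train(faces, np.array(labels))

def identify(frame_gray):
    """Return (person_id, confidence) for each face found in the frame."""
    return [recognizer.predict(face) for face in crop_faces(frame_gray)]
```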

반복적인 PCA 재구성을 이용한 얼굴 영상에서의 안경 제거 (Glasses Removal from Facial Images with Recursive PCA Reconstruction)

  • 오유화;안상철;김형곤;김익재;이성환
    • 대한전자공학회논문지SP / Vol. 41, No. 3 / pp.35-49 / 2004
  • This paper proposes a recursive PCA (Principal Component Analysis) reconstruction method that produces a gray-scale, glasses-removed image from a color frontal face image. The proposed method first normalizes the color input image into a gray-scale image of fixed size using color and shape information. Recursive PCA reconstruction on the normalized face image locates the occlusion region caused by the glasses and, at the same time, generates a reconstructed image that compensates for it. These resulting images are then used to synthesize a natural glasses-removed image automatically. Applied to real images with glasses, the proposed method produced glasses-removed images that were natural and, in most cases, close to the input image. With further refinement, the method can be applied to other occlusion problems and is expected to help improve the recognition rate of automatic face recognition systems.
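
A minimal sketch of the recursive-PCA idea described in the abstract, assuming a PCA basis already fitted on glasses-free faces (here via scikit-learn); the iteration count and the error threshold used to flag occluded pixels are illustrative, not the paper's values.

```python
# Hypothetical sketch: project onto a glasses-free eigenface basis, flag pixels
# with large reconstruction error as occlusion, replace them with the
# reconstruction, and iterate.
import numpy as np
from sklearn.decomposition import PCA

def remove_occlusion(image_vec, pca, n_iters=10, thresh=30.0):
    """image_vec: flattened grayscale face, normalized to the training size."""
    x = image_vec.astype(float).copy()
    recon = x
    for _ in range(n_iters):
        recon = pca.inverse_transform(pca.transform(x[None, :]))[0]
        occluded = np.abs(x - recon) > thresh      # estimated occlusion mask
        if not occluded.any():
            break
        x[occluded] = recon[occluded]              # fill occluded pixels only
    return recon

# pca = PCA(n_components=50).fit(glasses_free_faces)  # rows = flattened faces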

PCA 표상을 이용한 강인한 얼굴 표정 인식 (Robust Facial Expression Recognition using PCA Representation)

  • 신영숙
    • 인지과학 / Vol. 16, No. 4 / pp.323-331 / 2005
  • This paper proposes an improved system that is robust to illumination changes and can recognize facial expressions across various internal states without a reference cue such as a neutral expression. As preprocessing for extracting expression information, a whitening step is applied; it reduces sensitivity to illumination changes by normalizing the image data to zero mean and unit variance. After whitening, a PCA representation consisting of the principal components with the first component excluded is used as the expression information, which makes it possible to extract facial expression features without a neutral-expression cue. The experimental results also show that the system can recognize diverse and natural facial expressions by performing recognition, based on a dimensional model of internal states, on expression images randomly selected from facial expressions corresponding to 83 internal states.
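
A minimal sketch of the described preprocessing, assuming flattened grayscale images and scikit-learn's PCA: each image is whitened to zero mean and unit variance, and the PCA coefficients with the first principal component dropped serve as the expression features. The number of components is an illustrative choice.

```python
# Hypothetical sketch: per-image whitening followed by a PCA representation
# that discards the first principal component.
import numpy as np
from sklearn.decomposition import PCA

def expression_features(images, n_components=30):
    """images: (n_samples, n_pixels) array of flattened facial images."""
    X = images.astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    pca = PCA(n_components=n_components).fit(X)
    coeffs = pca.transform(X)
    return coeffs[:, 1:]          # drop the first principal component
```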

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / Vol. 9, No. 1 / pp.173-188 / 2013
  • In facial expression recognition systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate the approach.
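
The LBP-plus-SVM stage of the pipeline can be sketched as follows, assuming the eye, nose, and mouth crops have already been extracted with Haar-feature-based cascade classifiers; the LBP parameters and SVM hyperparameters are illustrative, not the paper's settings.

```python
# Hypothetical sketch: LBP histograms from pre-cropped facial regions,
# concatenated into one feature vector and fed to a one-vs.-rest RBF SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

P, R = 8, 1                                   # LBP neighbours and radius

def lbp_histogram(region):
    """Uniform LBP histogram of a single grayscale region crop."""
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_descriptor(regions):                 # regions: [eyes, nose, mouth] crops
    return np.concatenate([lbp_histogram(r) for r in regions])

# X = np.stack([face_descriptor(r) for r in all_region_sets]); y = expression labels
clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale", C=10.0))
# clf.fit(X, y)
```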

웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현 (Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application)

  • 박경숙;허영남;김응곤
    • 한국정보통신학회논문지 / Vol. 6, No. 8 / pp.1310-1318 / 2002
  • In this study, we developed a 3D face modeler and animator that generates a 3D model from frontal and profile images, without the expensive 3D scanners or cameras required by existing methods. The system is independent of any particular platform or software: a 3D face model can be animated on the web by connecting to an animation server, and it was implemented with the Java 3D API. The face modeler extracts MPEG-4 FDP (Facial Definition Parameter) feature points from the input images and deforms a generic face model according to these feature points to generate a 3D face model. The animator animates and renders the face model according to the FAP (Facial Animation Parameters). The system can be used to create avatars on the web.
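
A rough sketch of the model-deformation step, not the authors' implementation: each vertex of a generic face mesh is moved by a distance-weighted blend of the displacements between the extracted FDP feature points and the generic model's corresponding points. The radial-basis weighting and its bandwidth are assumptions for illustration.

```python
# Hypothetical sketch: deform a generic face mesh toward extracted FDP points
# using distance-weighted (radial-basis) blending of the feature displacements.
import numpy as np

def deform_generic_model(vertices, generic_fdp, target_fdp, sigma=0.05):
    """vertices: (V, 3); generic_fdp/target_fdp: (F, 3) corresponding points."""
    deformed = vertices.copy()
    displacements = target_fdp - generic_fdp
    for i, v in enumerate(vertices):
        d = np.linalg.norm(generic_fdp - v, axis=1)
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))       # radial-basis weights
        if w.sum() > 1e-8:
            deformed[i] = v + (w[:, None] * displacements).sum(0) / w.sum()
    return deformed
```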

예제학습 방법에 기반한 저해상도 얼굴 영상 복원 (Face Hallucination based on Example-Learning)

  • 이준태;김재협;문영식
    • 대한전기학회:학술대회논문집 / 대한전기학회 2008년도 학술대회 논문집 정보 및 제어부문 / pp.292-293 / 2008
  • In this paper, we propose a face hallucination method based on example learning. The traditional example-learning approach requires alignment of the face images. In the proposed method, facial images are segmented into patches, and weights are computed to represent an input low-resolution facial image as a weighted sum of low-resolution example images. High-resolution facial images are hallucinated by combining the weight vectors with the corresponding high-resolution patches in the training set. Experimental results show that the proposed method produces more reliable face hallucination results than the traditional example-learning approach.
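
A minimal sketch of the example-learning step described above: each low-resolution patch is expressed as a least-squares weighted combination of low-resolution training patches, and the same weights are applied to the paired high-resolution patches. The least-squares solver stands in for whatever weighting scheme the paper actually uses.

```python
# Hypothetical sketch: per-patch weights from the low-resolution examples are
# reused to combine the paired high-resolution examples.
import numpy as np

def hallucinate_patch(lr_patch, lr_examples, hr_examples):
    """lr_patch: (d,); lr_examples: (N, d); hr_examples: (N, D) paired patches."""
    # Solve min_w ||lr_examples.T @ w - lr_patch||^2 for the combination weights.
    w, *_ = np.linalg.lstsq(lr_examples.T, lr_patch, rcond=None)
    return hr_examples.T @ w          # combine the high-resolution counterparts
```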

얼굴뼈 골절의 진단과 치료에 64채널 3D VCT와 Conventional 3D CT의 비교 (Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures)

  • 정종명;김종환;홍인표;최치훈
    • Archives of Plastic Surgery / Vol. 34, No. 5 / pp.605-610 / 2007
  • Purpose: Facial trauma is increasing with the growing popularity of sports and greater exposure to crime and traffic accidents. Compared to the 3D CT of the 1990s, the latest CT has improved significantly, resulting in more accurate diagnosis. The objective of this study is to compare 64-channel 3-dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: 45 patients with facial trauma were examined by 3D VCT from Jan. 2006 to Feb. 2007. The 64-channel 3D VCT, which consists of 64 detectors, produces axial images with a 0.625 mm slice thickness and scans 175 mm per second. These images are converted into 3-dimensional images using the Rapidia 2.8 software. The axial images are reconstructed into a 3-dimensional image by the volume rendering method, and into coronal or sagittal images by the multiplanar reformatting method. Results: In contrast to the previous 3D CT, which builds 3D images from 1-2 mm axial images, 64-channel 3D VCT takes 0.625 mm thin axial images and obtains full images without a definite step-ladder appearance. 64-channel 3D VCT is effective in diagnosing thin linear bone fractures and the depth and degree of fracture deviation. Conclusion: In cost and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct full images regardless of direction, with twice the resolution and four times the speed of the previous 3D CT, 3D VCT allows accurate evaluation of the exact site and deviation of fine fractures.

콘볼루션 신경망 기반의 안면영상을 이용한 사상체질 분류 (Sasang Constitution Classification using Convolutional Neural Network on Facial Images)

  • 안일구;김상혁;정경식;김호석;이시우
    • 사상체질의학회지 / Vol. 34, No. 3 / pp.31-40 / 2022
  • Objectives Sasang constitutional medicine is a traditional Korean medicine that classifies humans into four constitutions in consideration of individual differences in physical, psychological, and physiological characteristics. In this paper, we propose a method to classify Taeeum person (TE) vs. non-Taeeum person (NTE), Soeum person (SE) vs. non-Soeum person (NSE), and Soyang person (SY) vs. non-Soyang person (NSY) using a convolutional neural network with facial images only. Methods Based on the VGG16 convolutional neural network architecture, transfer learning is carried out on the facial images of 3738 subjects to classify TE and NTE, SE and NSE, and SY and NSY. Data augmentation techniques are used to increase classification performance. Results The classification performance for TE and NTE, SE and NSE, and SY and NSY was 77.24%, 85.17%, and 80.18% by F1 score and 80.02%, 85.96%, and 72.76% by Precision-Recall AUC (area under the precision-recall curve), respectively. Conclusions Soeum persons were found to have the most distinctive facial features, as that constitution showed the best classification performance, followed by Taeeum persons and Soyang persons. The experimental results show that it is possible to classify constitutions with facial images alone. The performance is expected to increase with additional data such as BMI or a personality questionnaire.
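
A minimal sketch of the transfer-learning setup described in the abstract, using Keras: a frozen VGG16 backbone with a small binary head (e.g., TE vs. NTE) and simple augmentation layers. The head architecture, augmentation choices, and hyperparameters are assumptions, not the paper's configuration.

```python
# Hypothetical sketch: VGG16 transfer learning with data augmentation for a
# binary constitution classifier (e.g. Taeeum vs. non-Taeeum).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                # freeze the backbone

model = models.Sequential([
    layers.RandomFlip("horizontal"),                  # data augmentation
    layers.RandomRotation(0.05),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),            # TE vs. NTE
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```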

설명가능한 인공지능을 활용한 안면 특징 분석 기반 사상체질 검출 (Sasang Constitution Detection Based on Facial Feature Analysis Using Explainable Artificial Intelligence)

  • 김정균;안일구;이시우
    • 사상체질의학회지 / Vol. 36, No. 2 / pp.39-48 / 2024
  • Objectives The aim was to develop a method for detecting Sasang constitution based on ratios of facial landmarks and to provide an objective and reliable tool for Sasang constitution classification. Methods Facial images, KS-15 scores, and certainty scores were collected from subjects identified by the Korean Medicine Data Center. Facial landmarks were detected, yielding 2279 facial-ratio features. Tree-based models were trained to classify Sasang constitution, and Shapley Additive Explanations (SHAP) analysis was employed to identify important facial features. Additionally, Body Mass Index (BMI) and a personality questionnaire were incorporated as supplementary information to enhance model performance. Results Using the tree-based models, the accuracy for classifying the Taeeum, Soeum, and Soyang constitutions was 81.90%, 90.49%, and 81.90%, respectively. SHAP analysis revealed important facial features, while the inclusion of BMI and the personality questionnaire improved model performance. This demonstrates that facial-ratio-based Sasang constitution analysis yields effective and accurate classification results. Conclusions Facial-ratio-based Sasang constitution analysis provides rapid and objective results compared to traditional methods. This approach holds promise for enhancing personalized medicine in Korean traditional medicine.
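
A minimal sketch of the tree-model-plus-SHAP workflow described above; the gradient-boosted classifier, the synthetic stand-in data, and the plotting call are illustrative choices, not the paper's setup.

```python
# Hypothetical sketch: a tree-based classifier on facial-ratio features with
# SHAP analysis to surface the most influential ratios.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in data so the sketch runs; replace with the facial-ratio features
# (n_subjects x 2279) and the constitution labels (e.g. TE vs. non-TE).
X, y = make_classification(n_samples=500, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

explainer = shap.TreeExplainer(clf)              # SHAP values for tree models
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)           # ranks the features by impact
```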

컬러 정지 영상에서 색상과 모양 정보를 이용한 얼굴 영역 검출 (Facial Regions Detection Using the Color and Shape Information in Color Still Images)

  • 김영길;한재혁;안재형
    • 한국멀티미디어학회논문지 / Vol. 4, No. 1 / pp.67-74 / 2001
  • This paper proposes a facial region detection algorithm that uses color and shape information in color still images. The proposed algorithm reduces the influence of illumination by using only the Cb and Cr components of the YCbCr color space. After skin-color segmentation, morphological filtering and geometric correction are applied to remove noise from the face candidate regions and to simplify them. Even when several people appear in the input image, the face candidate regions can be separated by labeling, and tilted facial regions can also be detected successfully by extracting ellipse features based on second-order moments.
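
A minimal sketch of the described pipeline using OpenCV: threshold skin color in the Cb/Cr channels, clean the mask with morphological filtering, label the candidate regions, and estimate an ellipse orientation from second-order moments so that tilted faces are also handled. The Cb/Cr thresholds and area cutoff are illustrative, not the paper's values.

```python
# Hypothetical sketch: skin-color segmentation in Cb/Cr, morphological cleanup,
# labeling of candidate regions, and ellipse orientation from second moments.
import cv2
import numpy as np

def detect_face_regions(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)).astype(np.uint8) * 255

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes

    n_labels, labels = cv2.connectedComponents(mask)
    ellipses = []
    for lbl in range(1, n_labels):                             # skip background
        m = cv2.moments((labels == lbl).astype(np.uint8), binaryImage=True)
        if m["m00"] < 500:                                     # discard tiny blobs
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Orientation of the equivalent ellipse from central second moments.
        theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
        ellipses.append((cx, cy, theta))
    return ellipses
```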
