• Title/Abstract/Keywords: Facial image

Search results: 834 items, processing time 0.037 seconds

안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정 (Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction)

  • 홍석미;유현
    • 융합정보논문지
    • /
    • Vol. 11, No. 7
    • /
    • pp.1-6
    • /
    • 2021
  • The aim of this study is to improve the performance of an artificial neural network facial image analysis system by means of an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model is built to classify facial image age, and heat maps are extracted from the ResNet model that capture how the output nodes change in response to changes in the input nodes. Multiple extracted heat maps are combined to form facial landmarks related to age-group prediction. Through these landmarks, the importance of each pixel position can be analyzed, and removing low-weight pixels allows a substantial reduction of the input data. This technique thereby contributes to improving the computational performance of the neural network system.
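A minimal sketch of the heat-map idea in this entry, assuming a PyTorch ResNet age classifier; the ResNet-18 backbone, the eight age groups, and the 10% keep ratio are illustrative assumptions, not values from the paper. The heat map records how the predicted age-group score responds to each input pixel, and combining maps over many faces yields a mask that keeps only high-weight pixel positions.

```python
import torch
import torchvision.models as models

model = models.resnet18(num_classes=8)   # hypothetical 8 age groups
model.eval()

def sensitivity_heatmap(model, image):
    """Gradient of the top age-group score with respect to each input pixel."""
    image = image.clone().requires_grad_(True)
    model(image.unsqueeze(0)).max().backward()
    return image.grad.abs().sum(dim=0)            # (H, W) heat map

def landmark_mask(heatmaps, keep_ratio=0.10):
    """Combine heat maps from many faces and keep the highest-weight pixels."""
    combined = torch.stack(heatmaps).mean(dim=0)
    threshold = torch.quantile(combined.flatten(), 1.0 - keep_ratio)
    return combined >= threshold

faces = [torch.rand(3, 224, 224) for _ in range(4)]        # dummy face images
mask = landmark_mask([sensitivity_heatmap(model, f) for f in faces])
reduced_input = faces[0] * mask                             # low-weight pixels zeroed out
```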

YCbCr 컬러 영상 변환을 통한 얼굴 영역 자동 검출 (Facial Region Tracking in YCbCr Color Coordinates)

  • 한명희;김경섭;윤태호;신승원;김인영
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2005년도 심포지엄 논문집 정보 및 제어부문
    • /
    • pp.63-65
    • /
    • 2005
  • In this study, an automatic face tracking algorithm is proposed that uses the color and edge information of a color image. To reduce the effects of variations in illumination conditions, the acquired CCD color image is first transformed into YCbCr color coordinates, and subsequently morphological image processing operations and elliptical geometric measures are applied to extract the refined facial area.
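A minimal sketch of the YCbCr-based extraction described in this abstract, using OpenCV; the Cb/Cr skin-tone thresholds and the kernel size are common heuristic values, not the paper's parameters.

```python
import cv2
import numpy as np

def face_region_mask(bgr_image):
    # OpenCV orders the channels Y, Cr, Cb.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    # Skin-tone thresholds in the chrominance plane (largely illumination-insensitive).
    mask = ((cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)).astype(np.uint8) * 255
    # Morphological opening/closing to clean up the raw skin mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# The remaining blobs would then be filtered with elliptical geometric measures
# (e.g. ellipse fit and aspect ratio) to keep only face-like regions.
```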


Facial Feature Extraction Based on Private Energy Map in DCT Domain

  • Kim, Ki-Hyun;Chung, Yun-Su;Yoo, Jang-Hee;Ro, Yong-Man
    • ETRI Journal
    • /
    • Vol. 29, No. 2
    • /
    • pp.243-245
    • /
    • 2007
  • This letter presents a new feature extraction method based on the private energy map (PEM) technique to utilize the energy characteristics of a facial image. Compared with a non-facial image, a facial image shows large energy congestion in particular regions of the discrete cosine transform (DCT) coefficients. The PEM is generated from the energy probability of the DCT coefficients of facial images. In experiments, face recognition rates of 100% for the ORL database and 98.8% for the ETRI database were achieved.
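A rough sketch of the PEM idea under stated assumptions: the image size (64x64), the number of selected coefficients, and the plain energy averaging are illustrative, not the letter's exact procedure. DCT coefficient energy is averaged over face images, and the coefficients at the highest-energy positions are taken as features.

```python
import cv2
import numpy as np

def energy_map(face_images):
    """Normalized energy of DCT coefficients over grayscale face images."""
    acc = np.zeros(face_images[0].shape, dtype=np.float64)
    for img in face_images:
        acc += cv2.dct(img.astype(np.float32)).astype(np.float64) ** 2
    return acc / acc.sum()                       # energy probability map (PEM)

def extract_features(img, pem, num_coeffs=100):
    """Pick the DCT coefficients at the highest-energy positions of the PEM."""
    coeffs = cv2.dct(img.astype(np.float32)).flatten()
    idx = np.argsort(pem, axis=None)[::-1][:num_coeffs]
    return coeffs[idx]

faces = [np.random.rand(64, 64).astype(np.float32) for _ in range(10)]  # dummy faces
pem = energy_map(faces)
feature_vector = extract_features(faces[0], pem)
```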


A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 20, No. 12
    • /
    • pp.21-28
    • /
    • 2015
  • In this paper, we propose a method for detecting glasses in a facial image. Glasses are detected using a weighted sum of the results of two detectors, one based on facial element detection and one based on a glasses-frame candidate region. The facial-element method defines a detection probability for glasses according to which facial components are detected, while the candidate-region method detects glasses by defining features of the glasses frame within the candidate region. Finally, the results of the two methods are combined with weights. The proposed method is expected to improve a security system's ability to recognize facial accessories by raising the detection performance for glasses or sunglasses, for example when a face is captured at an ATM.
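A minimal sketch of the weighted combination this abstract describes: a score from facial-element detection and a score from the glasses-frame candidate region are fused by a weighted sum. The weights, the 0.5 decision threshold, and the [0, 1] score convention are illustrative assumptions.

```python
def glasses_score(element_score: float, frame_score: float,
                  w_element: float = 0.4, w_frame: float = 0.6) -> float:
    """Weighted sum of the two detectors' confidence scores (both in [0, 1])."""
    return w_element * element_score + w_frame * frame_score

def wears_glasses(element_score: float, frame_score: float) -> bool:
    return glasses_score(element_score, frame_score) >= 0.5

# Example: strong frame evidence around the eyes, weaker facial-element evidence.
print(wears_glasses(element_score=0.3, frame_score=0.8))   # True (score 0.60)
```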

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 3
    • /
    • pp.323-333
    • /
    • 2023
  • Facial expression recognition can aid the development of fatigue driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method is proposed with a residual masking reconstruction network as its backbone to achieve more efficient expression recognition and classification. The residual layers capture the information features of the input image, and the masking layer assigns weight coefficients to the different features, enabling accurate and effective analysis of images of different sizes. To further improve performance, the loss function of the model is optimized along two aspects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. Simulation results show that the ROC of the proposed method remained above 0.9995, so different expressions can be accurately distinguished, and that the precision reached 75.98%, indicating excellent performance of the facial expression recognition model.
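A rough PyTorch sketch of a residual block with a masking branch, in the spirit of the residual masking network described here; the layer sizes, the 1x1 convolution, and the sigmoid mask are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualMaskingBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Masking branch: per-position weight coefficients for the residual features.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        features = self.residual(x)
        weights = self.mask(features)             # weights for different feature positions
        return self.relu(x + features * weights)  # masked residual connection

x = torch.rand(1, 64, 56, 56)
print(ResidualMaskingBlock(64)(x).shape)          # torch.Size([1, 64, 56, 56])
```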

다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법 (Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm)

  • 주종태;장인훈;양현창;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13, No. 8
    • /
    • pp.754-759
    • /
    • 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. Feature vectors are extracted from the speech signal using acoustic features only, without linguistic features, and the emotional pattern is classified with a neural network. From the facial image, features of the mouth, eyes, and eyebrows are selected, and the extracted feature vectors are reduced to a low-dimensional representation by Principal Component Analysis (PCA). The final emotion recognition result is obtained by fusing the outputs from the facial image and the speech signal.
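A minimal sketch of the bi-modal fusion pipeline under stated assumptions: the classifiers (scikit-learn MLPs), the feature dimensions, the dummy data, and the equal fusion weights are all illustrative; only the overall structure (PCA-reduced facial features, a separate speech classifier, and a fused decision) follows the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happy", "sad", "angry", "surprise"]

# Dummy data: 120-dim facial features (mouth/eyes/eyebrows) and 26-dim acoustic features.
rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=200)
face_feats, speech_feats = rng.normal(size=(200, 120)), rng.normal(size=(200, 26))

pca = PCA(n_components=20).fit(face_feats)        # low-dimensional facial feature vectors
face_clf = MLPClassifier(max_iter=500).fit(pca.transform(face_feats), y)
speech_clf = MLPClassifier(max_iter=500).fit(speech_feats, y)

def recognize(face_x, speech_x, w_face=0.5, w_speech=0.5):
    """Fuse the per-emotion probabilities from the two modalities."""
    p_face = face_clf.predict_proba(pca.transform(face_x.reshape(1, -1)))[0]
    p_speech = speech_clf.predict_proba(speech_x.reshape(1, -1))[0]
    return EMOTIONS[int(np.argmax(w_face * p_face + w_speech * p_speech))]

print(recognize(face_feats[0], speech_feats[0]))
```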

형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식 (Full Face Recognition Using Features Extracted by Shape Analysis and the Back-Propagation Algorithm)

  • 최동선;이주신
    • 전자공학회논문지B
    • /
    • Vol. 33B, No. 10
    • /
    • pp.63-71
    • /
    • 1996
  • This paper proposes a method that analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. Using the feature parameters of the facial elements extracted by this method, full human faces are recognized by a neural network trained with the back-propagation (BP) algorithm. The input image is converted to a binary image and then labeled. The area, circumference, and circular degree of each labeled region are obtained using chain codes and defined as feature parameters of the face image. The two eyes are first extracted from the similarity and distance of the feature parameters of the facial elements, and the input face image is then corrected by standardizing it on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. The distances and angles between the feature points are used as parameters to recognize the full face. To show the validity of the learning algorithm, we confirmed that the proposed algorithm achieves a 100% recognition rate on both learned and non-learned data for 20 persons.
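A minimal sketch of the shape features this abstract describes (area, circumference, and circular degree of each labeled region), computed here with OpenCV contours rather than an explicit chain code; the synthetic test image is an assumption.

```python
import cv2
import numpy as np

def shape_features(binary_image):
    """Return (area, perimeter, circularity) for each region in a binary image."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, closed=True)
        if perimeter == 0:
            continue
        circularity = 4 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
        features.append((area, perimeter, circularity))
    return features

# The eyes can then be chosen as the pair of regions with the most similar
# feature values and a plausible distance between them.
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (60, 80), 12, 255, -1)
cv2.circle(img, (140, 80), 12, 255, -1)
print(shape_features(img))
```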


An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems
    • /
    • Vol. 20, No. 4
    • /
    • pp.558-573
    • /
    • 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as an output. Typically, quality assessment techniques use deep learning methods to categorize images, but the models operate as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust, as they provide visual evidence of the active regions within an image on which the deep learning model bases its prediction. Here, we developed a technique for reliable prediction of facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping and local interpretable model-agnostic explanations was used to explain the model. This approach has been implemented in the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual evidence for the model, where both the saliency-map and perturbation-based explainability techniques verify the predictions.
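A rough sketch of combining Grad-CAM with a LIME explanation for an image classifier, in the spirit of this abstract; the ResNet-18 quality model, the target layer, the number of LIME samples, and the equal-weight averaging of the two maps are assumptions.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
from lime import lime_image

model = models.resnet18(num_classes=2).eval()     # hypothetical good/poor-quality classifier

def grad_cam(model, image, layer):
    """Gradient-weighted class activation map for the top predicted class."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    model(image.unsqueeze(0)).max().backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # channel importance
    cam = torch.relu((weights * acts["a"]).sum(dim=1)).squeeze(0)
    return (cam / (cam.max() + 1e-8)).detach().numpy()

def lime_map(model, image_np, num_samples=200):
    """Per-pixel importance mask from LIME's superpixel explanation."""
    def predict(batch):
        x = torch.tensor(batch, dtype=torch.float32).permute(0, 3, 1, 2)
        return torch.softmax(model(x), dim=1).detach().numpy()
    exp = lime_image.LimeImageExplainer().explain_instance(
        image_np, predict, top_labels=1, num_samples=num_samples)
    _, mask = exp.get_image_and_mask(exp.top_labels[0], hide_rest=False)
    return mask.astype(float)

# Combined explanation: upsample the CAM to image size and average the two maps.
image_np = np.random.rand(224, 224, 3)
cam = grad_cam(model, torch.tensor(image_np, dtype=torch.float32).permute(2, 0, 1),
               model.layer4[-1])
combined = 0.5 * cv2.resize(cam, (224, 224)) + 0.5 * lime_map(model, image_np)
```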

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 12, No. 4
    • /
    • pp.1657-1662
    • /
    • 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces that contribute to various practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed facial expression recognition framework, the overall procedure includes accurate face detection, to remove background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation; the extracted 1D transform features are then reduced by principal component analysis. Finally, the reduced features are trained and tested using a Hidden Markov Model (HMM). Experimental evaluation on two public datasets of facial expression videos, Cohn-Kanade and AT&T, achieved expression recognition rates of 96.75% and 96.92%, respectively, showing the superiority of the proposed approach over state-of-the-art methods.
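A minimal sketch of the HMM classification stage under stated assumptions: one Gaussian HMM per expression is trained on PCA-reduced frame features, and a new sequence is assigned to the model with the highest log-likelihood. The hmmlearn library, the feature dimensions, the number of hidden states, and the dummy data are all assumptions, not the paper's setup.

```python
import numpy as np
from hmmlearn import hmm
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
EXPRESSIONS = ["happy", "sad", "surprise"]

# Dummy data: per expression, 5 sequences of 20 frames with 50 features per frame.
train = {e: [rng.normal(loc=i, size=(20, 50)) for _ in range(5)]
         for i, e in enumerate(EXPRESSIONS)}

pca = PCA(n_components=10).fit(np.vstack([s for seqs in train.values() for s in seqs]))

hmms = {}
for expr, seqs in train.items():
    X = np.vstack([pca.transform(s) for s in seqs])
    lengths = [len(s) for s in seqs]
    hmms[expr] = hmm.GaussianHMM(n_components=4, n_iter=20).fit(X, lengths)

def recognize(sequence):
    """Pick the expression whose HMM best explains the frame sequence."""
    feats = pca.transform(sequence)
    return max(hmms, key=lambda e: hmms[e].score(feats))

print(recognize(rng.normal(loc=1, size=(20, 50))))   # expected: "sad"
```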

감정 인식을 위한 얼굴 영상 분석 알고리즘 (Facial Image Analysis Algorithm for Emotion Recognition)

  • 주영훈;정근호;김문환;박진배;이재연;조영조
    • 한국지능시스템학회논문지
    • /
    • Vol. 14, No. 7
    • /
    • pp.801-806
    • /
    • 2004
  • Emotion recognition technology is needed in many areas of society, but it remains an unsolved problem because of the difficulty of the recognition process. In particular, emotion recognition from facial images requires techniques for analyzing the facial image, and because facial analysis is difficult, much research is still in progress. In this paper, we propose a facial image analysis algorithm for emotion recognition. The proposed algorithm consists of a face region extraction algorithm and a facial component extraction algorithm. For face region extraction, we propose a method using a fuzzy color filter that can extract the face region robustly under various illumination conditions. The facial component extraction algorithm uses a virtual face model to extract the facial components more accurately and quickly. Finally, the procedure of each algorithm was examined and its performance evaluated through simulations.
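A rough sketch of a fuzzy color filter in the spirit of the face region extraction described above: each pixel's Cr/Cb values receive fuzzy memberships to the skin-color class via triangular membership functions, and the memberships are combined with a fuzzy AND (minimum). The membership centers, widths, and the 0.5 defuzzification threshold are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def triangular(x, center, width):
    """Triangular fuzzy membership function, returning values in [0, 1]."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

def skin_membership(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    mu_cr = triangular(cr, center=153, width=25)
    mu_cb = triangular(cb, center=102, width=25)
    return np.minimum(mu_cr, mu_cb)               # fuzzy AND of the two memberships

def face_region(bgr_image, threshold=0.5):
    """Defuzzify: pixels with high skin membership form the face region mask."""
    return (skin_membership(bgr_image) >= threshold).astype(np.uint8) * 255
```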