• Title/Abstract/Keywords: Facial Image


안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정 (Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction)

  • 홍석미;유현
    • 융합정보논문지
    • /
    • Vol. 11, No. 7
    • /
    • pp.1-6
    • /
    • 2021
  • The aim of this study is to improve the performance of an artificial-neural-network facial image analysis system based on an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model for classifying facial image age is constructed, and heat maps are extracted from the ResNet model that detect changes in the output nodes in response to changes in the input nodes. The extracted heat maps are combined to form facial landmarks related to age classification. Through these landmarks, the importance of each pixel position can be analyzed, and removing low-weight pixels makes a substantial reduction of the input data possible. This technique contributes to improving the computational performance of artificial neural network systems.
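The heat-map step the abstract describes (perturbing input nodes and observing the change in the output) can be sketched as an occlusion-sensitivity map. The model, patch size, and keep fraction below are illustrative stand-ins, not the paper's ResNet setup:

```python
import numpy as np

def occlusion_heatmap(model, image, patch=4, baseline=0.0):
    """Heat map by perturbation: occlude each patch of the input and record
    how much the model's output changes. High values mark pixels the
    age predictor depends on (candidate landmarks)."""
    h, w = image.shape
    ref = model(image)
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            probe = image.copy()
            probe[y:y + patch, x:x + patch] = baseline  # occlude one patch
            heat[y:y + patch, x:x + patch] = abs(model(probe) - ref)
    return heat

def select_landmarks(heatmaps, keep=0.05):
    """Combine several heat maps and keep only the top fraction of pixels,
    discarding low-weight pixels to shrink the input data."""
    combined = np.mean(heatmaps, axis=0)
    threshold = np.quantile(combined, 1.0 - keep)
    return combined >= threshold            # boolean landmark mask

def toy_model(img):
    """Stand-in 'age model' that only looks at an 8x8 central region."""
    return float(img[12:20, 12:20].sum())

img = np.ones((32, 32))
mask = select_landmarks([occlusion_heatmap(toy_model, img)], keep=0.05)
```

Only the patches overlapping the region the toy model actually reads receive nonzero sensitivity, so the landmark mask concentrates there.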

YCbCr 컬러 영상 변환을 통한 얼굴 영역 자동 검출 (Facial Region Tracking in YCbCr Color Coordinates)

  • 한명희;김경섭;윤태호;신승원;김인영
    • 대한전기학회:학술대회논문집
    • /
    • Proceedings of the 2005 KIEE Symposium, Information and Control Section
    • /
    • pp.63-65
    • /
    • 2005
  • In this study, an automatic face tracking algorithm is proposed using the color and edge information of a color image. To reduce the effects of variations in illumination conditions, the acquired CCD color image is first transformed into YCbCr color coordinates; subsequently, morphological image processing operations and elliptical geometric measures are applied to extract the refined facial area.

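The color-space step this entry relies on can be sketched as follows. The BT.601 conversion is standard, but the Cb/Cr skin thresholds are commonly cited illustrative values, not necessarily those used in the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the chrominance planes only, so luminance (illumination)
    variation has less influence on the detected facial region."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

skin = np.array([[[200, 140, 120]]])   # a skin-like RGB pixel
grass = np.array([[[40, 160, 40]]])    # a green, non-skin pixel
```

In a full pipeline the resulting mask would then be cleaned with morphological operations and checked against elliptical measures, as the abstract describes.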

Facial Feature Extraction Based on Private Energy Map in DCT Domain

  • Kim, Ki-Hyun;Chung, Yun-Su;Yoo, Jang-Hee;Ro, Yong-Man
    • ETRI Journal
    • /
    • Vol. 29, No. 2
    • /
    • pp.243-245
    • /
    • 2007
  • This letter presents a new feature extraction method based on the private energy map (PEM) technique to utilize the energy characteristics of a facial image. Compared with a non-facial image, a facial image shows large energy congestion in special regions of discrete cosine transform (DCT) coefficients. The PEM is generated from the energy probability of the DCT coefficients of facial images. In experiments, face recognition rates of 100% on the ORL database and 98.8% on the ETRI database were achieved.

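The energy-congestion property the letter builds on can be illustrated with a small DCT experiment. The band size and test images below are assumptions for illustration, not the PEM construction itself:

```python
import numpy as np

def dct2(block):
    """2D DCT-II built from the orthonormal DCT matrix (no SciPy needed)."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def low_freq_energy_ratio(image, band=8):
    """Fraction of total DCT energy in the top-left (low-frequency) band --
    the kind of energy-concentration statistic a PEM is built from."""
    coeffs = dct2(image)
    return np.sum(coeffs[:band, :band] ** 2) / np.sum(coeffs ** 2)

# a smooth (face-like) image concentrates energy at low frequencies,
# while white noise spreads it roughly evenly across all coefficients
x = np.linspace(0, 1, 32)
smooth = np.outer(x, x)
rng = np.random.default_rng(0)
noise = rng.standard_normal((32, 32))
```

The smooth image puts nearly all of its energy in the low-frequency band, while the noise image leaves most of it outside, which is why the energy map discriminates facial from non-facial content.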

A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 20, No. 12
    • /
    • pp.21-28
    • /
    • 2015
  • In this paper, we propose a method for detecting glasses in facial images. Glasses are detected using a weighted sum of the results obtained from facial component detection and from a glasses-frame candidate region. The facial-component method detects glasses by defining the detection probability of glasses according to whether facial components are detected. The candidate-region method detects glasses by defining features of the glasses frame within the candidate region. Finally, the results of both methods are combined with weights. The proposed method is expected to improve a security system's ability to recognize facial accessories by raising the detection performance for glasses and sunglasses, for example at ATMs.
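The final weighted-sum fusion of the two detectors can be sketched as follows; the weight and decision threshold are placeholders, not the paper's tuned parameters:

```python
def fuse_glasses_scores(p_component, p_frame, w=0.5):
    """Weighted sum of the two detector outputs.
    p_component: probability from facial-component detection,
    p_frame: score from the glasses-frame candidate region,
    w: weight given to the component-based detector (assumed value)."""
    return w * p_component + (1.0 - w) * p_frame

def detect_glasses(p_component, p_frame, w=0.5, threshold=0.5):
    """Declare glasses present when the fused score passes the threshold."""
    return fuse_glasses_scores(p_component, p_frame, w) >= threshold
```

The weight `w` controls how much the fused decision trusts component evidence over frame evidence; in practice it would be chosen on validation data.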

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 3
    • /
    • pp.323-333
    • /
    • 2023
  • Facial expression recognition can aid in the development of fatigue driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method was proposed with a residual masking reconstruction network as its backbone to achieve more efficient expression recognition and classification. The residual layer was used to acquire and capture the information features of the input image, and the masking layer was used for the weight coefficients corresponding to different information features to achieve accurate and effective image analysis for images of different sizes. To further improve the performance of expression analysis, the loss function of the model is optimized from two aspects, feature dimension and data dimension, to enhance the accurate mapping relationship between facial features and emotional labels. The simulation results show that the ROC of the proposed method was maintained above 0.9995, which can accurately distinguish different expressions. The precision was 75.98%, indicating excellent performance of the facial expression recognition model.
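A minimal sketch of the residual-plus-masking idea described above, assuming simple fully-connected layers and sigmoid mask weights rather than the paper's actual network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_masking_block(x, w_res, w_mask):
    """One residual masking unit: a residual branch captures feature
    information, a masking branch produces per-feature weights in (0, 1),
    and the mask re-weights the residual before the skip connection.
    w_res and w_mask are illustrative weight matrices, not trained ones."""
    residual = np.maximum(0.0, x @ w_res)   # ReLU residual features
    mask = sigmoid(x @ w_mask)              # per-feature weight coefficients
    return x + mask * residual              # masked residual + identity

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))            # batch of 4 feature vectors
out = residual_masking_block(x,
                             rng.standard_normal((8, 8)) * 0.1,
                             rng.standard_normal((8, 8)) * 0.1)
```

The mask lets different information features receive different weight coefficients, which is the mechanism the abstract credits for handling images of different sizes and content.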

다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법 (Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm)

  • 주종태;장인훈;양현창;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 13, No. 8
    • /
    • pp.754-759
    • /
    • 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method that classifies four emotions (happy, sad, angry, surprise) by using facial images and speech signals together. We extract feature vectors from the speech signal using acoustic features without linguistic features and classify the emotional pattern using a neural network. From the facial image, we select features of the mouth, eyes, and eyebrows; the extracted feature vectors are reduced to low-dimensional feature vectors by principal component analysis (PCA). We then propose a method that fuses the emotion recognition results obtained from the facial image and the speech signal.
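The PCA reduction step applied to the facial feature vectors can be sketched as below; the feature dimensions are invented for illustration:

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto the top-k principal components,
    remaking high-dimensional features as low-dimensional vectors."""
    centered = features - features.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T              # (n_samples, k) reduced vectors

rng = np.random.default_rng(0)
# 50 mock feature vectors of dimension 20 (e.g. mouth/eye/eyebrow measures)
feats = rng.standard_normal((50, 20))
reduced = pca_reduce(feats, k=5)
```

The first reduced dimension carries the most variance, so the low-dimensional vectors preserve as much of the facial feature variation as k components allow.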

형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식 (Full face recognition using the feature extracted by shape analyzing and the back-propagation algorithm)

  • 최동선;이주신
    • 전자공학회논문지B
    • /
    • Vol. 33B, No. 10
    • /
    • pp.63-71
    • /
    • 1996
  • This paper proposes a method which analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. With the feature parameters of facial elements extracted by the method, full human faces are recognized by a neural network trained with the BP algorithm. The input image is converted into binary form and then labelled. The area, circumference, and circular degree of the labelled binary image are obtained using the chain code and defined as feature parameters of the face image. We first extract the two eyes from the similarity and distance of the feature parameters of each facial element, and then the input face image is corrected by standardizing on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. Distances and angles between the feature points are used as parameters to recognize the full face. To show the validity of the proposed algorithm, we confirmed that it achieves a 100% recognition rate on both learned and non-learned data for 20 persons.

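The three shape parameters the paper derives (area, circumference, circular degree) can be computed from a binary region as below. Perimeter is approximated here by counting exposed pixel edges rather than by an actual chain-code traversal:

```python
import numpy as np

def shape_features(mask):
    """Area, perimeter, and circularity (4*pi*area / perimeter^2) of a
    binary region. Circularity is 1 for a perfect circle and smaller for
    elongated or ragged shapes."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)                # background border, no wrap-around
    # an exposed edge is a foreground pixel whose 4-neighbor is background
    perimeter = sum(
        int((padded & ~np.roll(padded, shift, axis)).sum())
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    )
    circularity = 4.0 * np.pi * area / perimeter ** 2
    return area, perimeter, circularity

square = np.zeros((10, 10), dtype=bool)
square[2:8, 2:8] = True                     # 6x6 square: area 36, perimeter 24
a, p, c = shape_features(square)
```

Comparing these parameters across labelled regions is what lets the method pick out the two eyes by similarity and distance.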

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 12, No. 4
    • /
    • pp.1657-1662
    • /
    • 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection, which removes background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation. The resulting features are reduced by principal component analysis. Finally, the reduced features are trained and tested using a Hidden Markov Model (HMM). The experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively. These results show the superiority of the proposed approach over state-of-the-art methods.
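The HMM scoring step can be sketched with the standard scaled forward algorithm; the toy model parameters below are illustrative, not trained on expression data:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(observation sequence | HMM).
    pi: initial state probabilities, A[i, j]: transition i -> j,
    B[i, k]: probability that state i emits symbol k."""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # predict, then weight by emission
        log_like += np.log(alpha.sum())     # accumulate scaling factors
        alpha = alpha / alpha.sum()         # rescale to avoid underflow
    return log_like

# a 'steady' model: states persist and mostly emit their own symbol
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
steady = forward_log_likelihood([0, 0, 0, 0], pi, A, B)
flicker = forward_log_likelihood([0, 1, 0, 1], pi, A, B)
```

In recognition, one HMM per expression is trained, and a new feature sequence is assigned to the model with the highest forward likelihood; here the steady sequence scores higher under the sticky model than the flickering one.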

감정 인식을 위한 얼굴 영상 분석 알고리즘 (Facial Image Analysis Algorithm for Emotion Recognition)

  • 주영훈;정근호;김문환;박진배;이재연;조영조
    • 한국지능시스템학회논문지
    • /
    • Vol. 14, No. 7
    • /
    • pp.801-806
    • /
    • 2004
  • Emotion recognition technology is needed in many areas of society, but it remains an unsolved problem because of the difficulty of the recognition process. In particular, emotion recognition from facial images requires techniques for analyzing the facial image, and because such analysis is difficult, much research is still in progress. This paper proposes a facial image analysis algorithm for emotion recognition. The proposed algorithm consists of a facial region extraction algorithm and a facial component extraction algorithm. For facial region extraction, we propose a method using a fuzzy color filter that can robustly extract the facial region under various illumination conditions. The facial component extraction algorithm uses a virtual face model to enable more accurate and faster extraction of the facial components. Finally, simulations are used to examine the procedure of each algorithm and evaluate its performance.
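The fuzzy color filter idea, a graded skin-color degree rather than a hard threshold, can be sketched with triangular membership functions; the Cb/Cr ranges here are illustrative assumptions, not the paper's:

```python
import numpy as np

def triangular_membership(value, low, peak, high):
    """Triangular fuzzy membership: 0 outside [low, high], 1 at the peak,
    linear in between."""
    value = np.asarray(value, dtype=float)
    rising = (value - low) / (peak - low)
    falling = (high - value) / (high - peak)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

def fuzzy_skin_degree(cb, cr):
    """Fuzzy facial-color degree as the minimum (fuzzy AND) of the Cb and
    Cr memberships; pixels near the band edges get partial membership,
    which is what makes the filter robust to illumination changes."""
    return np.minimum(triangular_membership(cb, 77, 102, 127),
                      triangular_membership(cr, 133, 153, 173))
```

Thresholding the resulting degree (or feeding it to later stages) replaces the brittle binary skin/non-skin decision of a crisp color filter.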

얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션 (3D Facial Synthesis and Animation for Facial Motion Estimation)

  • 박도영;심연숙;변혜란
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 27, No. 6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we study a method that extracts motion from 2D facial image sequences and synthesizes it onto a 3D face model. To estimate motion in video, we use an optical-flow-based estimation method. To estimate the motion of facial features and of the face itself in a 2D image sequence, parameterized motion vectors that best account for the optical flow computed from two adjacent frames are extracted. These are then combined into a small number of parameters that describe the facial motion. The parameterized motion vectors are used for three different kinds of motion: the eye region, the lip and eyebrow region, and the face region. By combining them with Action Units, which can synthesize the motion of the face model, we obtain a 3D synthesis of the facial motion in the 2D image sequence.

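Fitting parameterized motion vectors to an optical-flow field, as described above, amounts to a least-squares fit. The sketch below uses a 6-parameter affine model as an assumed example of such a parameterization:

```python
import numpy as np

def fit_affine_motion(points, flow):
    """Least-squares fit of a 6-parameter affine motion model to flow
    vectors: u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y. This compresses
    a dense flow field into a handful of motion parameters per region."""
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    xy = np.column_stack([ones, x, y])
    # stack the u-equations over the v-equations into one linear system
    design = np.block([[xy, np.zeros_like(xy)],
                       [np.zeros_like(xy), xy]])
    target = np.concatenate([flow[:, 0], flow[:, 1]])
    params, *_ = np.linalg.lstsq(design, target, rcond=None)
    return params                           # a1..a6

# synthetic flow generated by a known affine motion (translation + shear)
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(50, 2))
true_params = np.array([2.0, 0.1, 0.0, -1.0, 0.0, 0.05])
u = true_params[0] + true_params[1] * pts[:, 0] + true_params[2] * pts[:, 1]
v = true_params[3] + true_params[4] * pts[:, 0] + true_params[5] * pts[:, 1]
est = fit_affine_motion(pts, np.column_stack([u, v]))
```

Fitting one such parameter set per region (eyes, lips/eyebrows, whole face) yields the compact motion description that is then mapped onto the model's Action Units.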