• Title/Summary/Keyword: Facial Image

Search results: 833

3D Facial Synthesis and Animation for Facial Motion Estimation

  • 박도영;심연숙;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 27, No. 6 / pp. 618-631 / 2000
  • In this paper, we study a method for extracting motion from 2D facial images and synthesizing it onto a 3D face model. To estimate motion in video, we use an optical-flow-based estimation method. To estimate the motion of the facial features and of the face itself in a 2D image sequence, we extract parameterized motion vectors that best account for the optical flow computed from two adjacent frames. These are then combined into a small number of parameters that describe the facial motion. The parameterized motion vectors are used for three different kinds of motion: the eye region, the lip and eyebrow region, and the face region. By combining them with Action Units, which can synthesize the motion of the face model, we obtain a 3D synthesis of the facial motion observed in the 2D video.

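The core step above, extracting parameterized motion vectors that best account for the optical flow between two adjacent frames, can be sketched as a least-squares fit of a motion model to a flow field. The 6-parameter affine model below is an illustrative stand-in for the paper's region-specific parameterizations; in practice the flow itself would come from an optical-flow estimator.

```python
import numpy as np

def fit_affine_motion(points, flow):
    """Least-squares fit of a 6-parameter affine motion model to a
    flow field: u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y.
    points: (N, 2) pixel coordinates; flow: (N, 2) flow vectors."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])   # design matrix per component
    au, *_ = np.linalg.lstsq(A, flow[:, 0], rcond=None)
    av, *_ = np.linalg.lstsq(A, flow[:, 1], rcond=None)
    return np.concatenate([au, av])                # [a0 .. a5]

# Synthetic check: flow generated by a known affine motion is recovered.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
true = np.array([1.0, 0.02, -0.01, -0.5, 0.00, 0.03])
u = true[0] + true[1] * pts[:, 0] + true[2] * pts[:, 1]
v = true[3] + true[4] * pts[:, 0] + true[5] * pts[:, 1]
params = fit_affine_motion(pts, np.column_stack([u, v]))
```

A small parameter vector like this is what gets mapped onto the model's Action Units.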

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / Vol. 9, No. 1 / pp. 173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS, and applying facial feature descriptors such as the local binary pattern (LBP) to these areas yields an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, a popular multi-classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate the approach.
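The feature construction described above, LBP applied per region with the histograms concatenated into one vector, can be sketched as follows. This is a minimal 8-neighbour LBP without the uniform-pattern mapping, and the random region crops stand in for the Haar-cascade-detected eye/nose/mouth areas.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (radius 1): each pixel is
    coded by thresholding its neighbours against the centre value."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def region_lbp_feature(regions):
    """Concatenate normalised 256-bin LBP histograms of the cropped
    regions (e.g. eyes, nose, mouth) into one feature vector."""
    hists = [np.bincount(lbp_image(r).ravel(), minlength=256) for r in regions]
    return np.concatenate([h / max(h.sum(), 1) for h in hists])

# Demo: three hypothetical region crops (eyes, nose, mouth).
rng = np.random.default_rng(1)
regions = [rng.integers(0, 256, (24, 48)),
           rng.integers(0, 256, (16, 24)),
           rng.integers(0, 256, (20, 40))]
feat = region_lbp_feature(regions)
```

The resulting vector would then feed the one-vs.-rest SVM with an RBF kernel (e.g. scikit-learn's `SVC(kernel='rbf')`).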

Facial Region Tracking by Infra-red and CCD Color Image

  • 윤태호;김경섭;한명희;신승원;김인영
    • 대한전기학회:학술대회논문집 / 대한전기학회 2005년도 심포지엄 논문집 정보 및 제어부문 / pp. 60-62 / 2005
  • In this study, an automatic tracking algorithm for the human face is proposed using YCbCr color information and thermal properties expressed as thermal indexes in an infra-red image. Facial candidates are estimated separately in the CbCr color and infra-red domains by applying morphological image-processing operations and geometrical shape measures that fit the elliptical features of a human face. A true face is then identified by a logical AND operation between the refined images from the CbCr color and infra-red domains.

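The final identification step, a logical AND between the colour-domain and infra-red-domain candidates, can be sketched as below. The CbCr skin-tone ranges and the temperature range are commonly used illustrative values, not the paper's, and the morphological cleanup and elliptical shape fitting are omitted.

```python
import numpy as np

def skin_mask_cbcr(cb, cr):
    """Face-candidate mask in the CbCr chrominance plane. The ranges
    are commonly cited skin-tone thresholds, not the paper's values."""
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def face_mask(cb, cr, thermal_c, t_lo=30.0, t_hi=38.0):
    """Final identification: logical AND between the colour-domain and
    infra-red-domain candidates (the temperature range is an assumption)."""
    return skin_mask_cbcr(cb, cr) & (thermal_c >= t_lo) & (thermal_c <= t_hi)

# Tiny demo: pixel 0 is skin-coloured and warm, pixel 1 is not skin-coloured.
cb = np.array([100, 50])
cr = np.array([150, 150])
thermal = np.array([36.0, 36.0])
mask = face_mask(cb, cr, thermal)
```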

3D Head Pose Estimation Using the Stereo Image

  • 양욱일;송환종;이용욱;손광훈
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp. 1887-1890 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a pair of stereo images, we automatically extract several important facial feature points using the disparity map, a Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region-dividing method based on the disparity map: in an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face from the background by the divergence of disparity. To estimate 3D head pose, we propose a 2D-3D Error Compensated SVD (EC-SVD) algorithm: the 3D coordinates of the facial features are estimated from stereo correspondence, and the head pose of an input image is then estimated using the EC-SVD method. Experimental results show that the proposed method estimates pose accurately.

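The SVD-based pose step can be illustrated with the standard absolute-orientation solution for rotation and translation from 3D point correspondences; the paper's EC-SVD adds an error-compensation refinement that is not reproduced here.

```python
import numpy as np

def rigid_pose_svd(model, observed):
    """Least-squares rotation R and translation t aligning 3D model
    points to observed points via SVD (the standard absolute-
    orientation solution)."""
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t

# Demo: recover a known 30-degree head rotation about the vertical axis.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]],
                 dtype=float)                  # hypothetical feature points
observed = model @ R_true.T + t_true
R_est, t_est = rigid_pose_svd(model, observed)
```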

Chronological Changes of Women's Ideal Beauty through Facial Image and Fashion of Korean Actress in the Late Twentieth Century

  • 백경진;한소원;김영인
    • 복식 / Vol. 62, No. 5 / pp. 44-58 / 2012
  • The purpose of this research is to examine the chronological changes of Korean actresses' facial images and fashion from the 1960s to the 1990s, and to identify the ideal beauty of Korean women reflected through the times. Adjectives describing representative actresses of each studied decade were collected from major newspapers and magazines. Korean women's ideal beauty was divided into four sub-types: youthful, pure, sophisticated, and sexy images. Analysis of the actresses' facial images and fashion showed that youthful and pure beauty appeared consistently over the studied periods, whereas the representative characteristics of sophisticated and sexy beauty changed over time under the influence of socio-cultural factors. The results of this research can provide meaningful sources for historical drama, celebrity marketing strategy planning, and personal image consulting.

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 8, No. 2 / pp. 105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion as humans do, using such combined information. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from speech signals and facial images, and propose a multimodal method that fuses the two recognition results into a single emotion decision. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal fusion applies a fuzzy membership function to the two results. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal gives a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision-fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
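The decision-fusion idea above, weighting each modality's result through an S-type membership function before combining, can be sketched as below. The membership parameters, per-class scores, and weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped membership function: 0 below a, 1 above b,
    quadratic blend in between."""
    x = np.asarray(x, dtype=float)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2.0 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1.0 - 2.0 * ((x - b) / (b - a)) ** 2, 1.0)))

# Hypothetical per-class scores from the two recognizers
# (classes: anger, happiness, sadness -- illustrative only).
speech_scores = np.array([0.2, 0.5, 0.3])
face_scores = np.array([0.1, 0.3, 0.6])

# Weight each modality by the S-type membership of its confidence,
# then fuse and decide.
w_speech = float(s_membership(speech_scores.max(), 0.0, 1.0))
w_face = float(s_membership(face_scores.max(), 0.0, 1.0))
fused = w_speech * speech_scores + w_face * face_scores
decision = int(np.argmax(fused))
```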

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm

  • 고광은;심귀보
    • 한국지능시스템학회논문지 / Vol. 21, No. 3 / pp. 395-400 / 2011
  • This paper proposes facial-image-based emotion recognition that takes into account the dynamic changes of emotional state appearing in facial expressions. The work divides into a stage that detects and analyzes emotional features from facial images and a stage that classifies and recognizes the emotional state. The first component is an emotion feature region detection and analysis technique using an Active Shape Model (ASM) combined with Facial Action Units (FAUs); the second is an emotional-state classification and recognition technique using a Dynamic Bayesian Network in the form of a Hidden Markov Model (HMM), for accurate recognition that accounts for the dynamic change of emotional state over time. In addition, a heuristic optimization step using the Harmony Search (HS) algorithm is applied when learning the HMM parameters for optimal emotional-state classification; with this, an emotion recognition system based on dynamic facial image changes is constructed and its performance improved.
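Harmony Search itself, used above to tune the HMM parameters, is easy to sketch in isolation: a memory of candidate solutions is maintained, new "harmonies" are improvised by memory consideration, pitch adjustment, or random selection, and the worst member is replaced whenever a better harmony is found. The toy objective below is a stand-in for the HMM training criterion, and the parameter values are illustrative defaults.

```python
import numpy as np

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal Harmony Search (minimization). hms: harmony memory size,
    hmcr: memory-consideration rate, par: pitch-adjustment rate,
    bw: pitch bandwidth."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    memory = rng.uniform(lo, hi, size=(hms, dim))
    scores = np.array([objective(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = memory[rng.integers(hms), d]   # memory consideration
                if rng.random() < par:                  # pitch adjustment
                    new[d] = np.clip(new[d] + bw * rng.uniform(-1, 1), lo, hi)
            else:
                new[d] = rng.uniform(lo, hi)            # random selection
        s = objective(new)
        worst = np.argmax(scores)
        if s < scores[worst]:                           # replace the worst
            memory[worst], scores[worst] = new, s
    best = np.argmin(scores)
    return memory[best], scores[best]

# Toy objective: the search should approach the minimum at the origin.
best_x, best_f = harmony_search(lambda x: float(np.sum(x ** 2)),
                                dim=3, bounds=(-1.0, 1.0))
```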

Emotion Training: Image Color Transfer with Facial Expression and Emotion Recognition

  • 김종현
    • 한국컴퓨터그래픽스학회논문지 / Vol. 24, No. 4 / pp. 1-9 / 2018
  • This paper proposes an emotion training framework that lets users recognize the early symptoms of schizophrenia themselves, by analyzing emotion through changes in facial expression. First, emotion values are obtained from captured facial-expression photographs using Microsoft's Emotion API, and emotional states are classified by recognizing subtle differences in facial expression over time via a peak-analysis-based standard deviation. The final goal is to analyze deficits in affect and expressive ability for emotions against the six basic emotional states proposed by Ekman, and to integrate those values into an image color transfer framework so that users can easily perceive their emotional changes and train themselves.
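The peak-analysis/standard-deviation step, flagging frames whose emotion score deviates strongly from the rest of the sequence, can be sketched as below. The threshold factor and score values are illustrative assumptions; in the paper the scores come from the Emotion API.

```python
import numpy as np

def expression_peaks(scores, k=1.5):
    """Flag frames whose emotion score deviates from the sequence mean
    by more than k standard deviations -- a simple stand-in for the
    peak-analysis / standard-deviation step described above."""
    s = np.asarray(scores, dtype=float)
    return np.flatnonzero(np.abs(s - s.mean()) > k * s.std())

# Hypothetical per-frame "happiness" scores with one expression peak.
scores = [0.2] * 10
scores[5] = 0.9
peaks = expression_peaks(scores)
```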

Facial Expression Animation Using Anatomy-Based 3D Face Modeling

  • 김형균;오무송
    • 한국정보통신학회논문지 / Vol. 7, No. 2 / pp. 328-333 / 2003
  • In this paper, based on 18 anatomically grounded muscle-group pairs that influence changes in facial expression, muscle movements can be combined for facial expression animation. A standard model is created by deforming a mesh to fit an individual's image, and then, to increase realism, two images of the individual's face (front and side) are mapped onto the mesh. The muscle model that drives the facial expression animation is a modified version of Waters' muscle model. Using this method, a textured, deformed face is generated, and the six facial expressions proposed by Ekman are animated.
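The muscle-driven deformation idea can be sketched with a much-simplified radial-falloff pull: vertices near a muscle attachment point move toward it in proportion to a cosine falloff. This conveys only the flavour of a Waters-style linear muscle; it is not his exact angular/radial formulation nor the paper's 18 muscle-group pairs.

```python
import numpy as np

def muscle_pull(vertices, attachment, contraction, radius):
    """Simplified muscle deformation: vertices within `radius` of the
    attachment point are pulled toward it with a cosine falloff.
    Hypothetical sketch, not Waters' actual model."""
    v = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(v - attachment, axis=1)
    w = np.where(d < radius, np.cos(d / radius * np.pi / 2.0), 0.0)
    toward = attachment - v
    safe_d = np.maximum(d, 1e-12)[:, None]     # avoid division by zero
    return v + contraction * w[:, None] * toward / safe_d

# Demo: one vertex inside the zone of influence, one outside.
verts = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
out = muscle_pull(verts, attachment=np.zeros(3), contraction=0.1, radius=1.0)
```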

Linear accuracy of cone-beam computed tomography and a 3-dimensional facial scanning system: An anthropomorphic phantom study

  • Oh, Song Hee;Kang, Ju Hee;Seo, Yu-Kyeong;Lee, Sae Rom;Choi, Hwa-Young;Choi, Yong-Suk;Hwang, Eui-Hwan
    • Imaging Science in Dentistry / Vol. 48, No. 2 / pp. 111-119 / 2018
  • Purpose: This study was conducted to evaluate the accuracy of linear measurements of 3-dimensional (3D) images generated by cone-beam computed tomography (CBCT) and facial scanning systems, and to assess the effect of scanning parameters, such as CBCT exposure settings, on image quality. Materials and Methods: CBCT and facial scanning images of an anthropomorphic phantom showing 13 soft-tissue anatomical landmarks were used in the study. The distances between the anatomical landmarks on the phantom were measured to obtain a reference for evaluating the accuracy of the 3D facial soft-tissue images. The distances between the 3D image landmarks were measured using a 3D distance measurement tool. The effect of scanning parameters on CBCT image quality was evaluated by visually comparing images acquired under different exposure conditions, but at a constant threshold. Results: Comparison of the repeated direct phantom and image-based measurements revealed good reproducibility. There were no significant differences between the direct phantom and image-based measurements of the CBCT surface volume-rendered images. Five of the 15 measurements of the 3D facial scans were found to be significantly different from their corresponding direct phantom measurements (P<.05). The quality of the CBCT surface volume-rendered images acquired at a constant threshold varied across different exposure conditions. Conclusion: These results proved that existing 3D imaging techniques were satisfactorily accurate for clinical applications, and that optimizing the variables that affected image quality, such as the exposure parameters, was critical for image acquisition.