• Title/Summary/Keyword: 표정 변화 (facial expression change)


Recognition of Facial Expressions Using Muscle-Based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
• Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.416-419
    • /
    • 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models for tracking facial features. Since the feature models are constructed with a small number of parameters and are deformable only within a limited range and set of directions, the search space for each feature can be restricted. The technique estimates muscular contractile degrees to classify the six principal facial expressions. The contractile vectors are obtained from the deformations of the facial muscle models, and similarities between those vectors and representative vectors of the principal expressions are used to determine the facial expression (see the sketch below).

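The abstract does not give the exact similarity measure, so the sketch below assumes cosine similarity between the observed contractile vector and per-expression representative vectors; the 4-dimensional vectors and the names `REPRESENTATIVE` and `classify_expression` are illustrative, not the paper's.

```python
import numpy as np

# Representative contractile vectors for the six principal expressions.
# The values are placeholders; the paper derives them from muscle-model
# deformations, which the abstract does not reproduce.
REPRESENTATIVE = {
    "happiness": np.array([0.8, 0.1, 0.0, 0.6]),
    "sadness":   np.array([0.1, 0.7, 0.2, 0.0]),
    "surprise":  np.array([0.2, 0.1, 0.9, 0.3]),
    "anger":     np.array([0.6, 0.5, 0.1, 0.1]),
    "disgust":   np.array([0.3, 0.6, 0.0, 0.4]),
    "fear":      np.array([0.2, 0.4, 0.7, 0.2]),
}

def classify_expression(contractile: np.ndarray) -> str:
    """Return the expression whose representative vector is most
    similar (cosine similarity) to the observed contractile vector."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(REPRESENTATIVE, key=lambda k: cosine(contractile, REPRESENTATIVE[k]))

print(classify_expression(np.array([0.7, 0.2, 0.1, 0.5])))  # closest: "happiness"
```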

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.795-802
    • /
    • 2005
  • In this paper, we present a facial expression recognition-and-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For recognition, we first detect the face area within the image acquired from the camera, then apply a normalization procedure for geometric and illumination correction. To classify a facial expression, we found that combining Gabor wavelets with an enhanced Fisher model gives the best result; in our case, the output is a set of 7 emotional weights. This weighting information, transmitted to the PDA over a mobile network, drives the non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation (a toy comparison follows below).
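
The paper's emotion curves are not specified in the abstract; the following toy comparison uses a smoothstep curve as a stand-in to show why a nonlinear timing curve reads more naturally than a linear blend:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 6)        # normalized time over the expression onset
linear_w = t                        # linear interpolation weights
curve_w = t * t * (3.0 - 2.0 * t)   # smoothstep stand-in for an emotion curve

# Blend neutral vertex offsets toward a target expression with each weighting.
neutral = np.zeros(3)
target = np.array([1.0, 0.5, 0.2])  # toy per-vertex displacement
frames_linear = neutral + linear_w[:, None] * target
frames_curve = neutral + curve_w[:, None] * target

print(np.round(linear_w, 3))  # constant rate of change
print(np.round(curve_w, 3))   # slow onset, fast middle, gentle settle
```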

A 3D Face Reconstruction and Tracking Method using the Estimated Depth Information (얼굴 깊이 추정을 이용한 3차원 얼굴 생성 및 추적 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions:PartB
    • /
    • v.18B no.1
    • /
    • pp.21-28
    • /
    • 2011
  • A 3D face shape derived from 2D images is useful in many applications, such as face recognition, face synthesis, and human-computer interaction. To this end, we develop a fast 3D Active Appearance Model (3D-AAM) method that uses depth estimation. The training images include 3D face poses that differ greatly from one another. The depth information of the landmarks is estimated from the training image sequence using an approximated Jacobian matrix, and is added at the test phase to handle 3D pose variations of the input face. Our experimental results show that the proposed method fits the face shape, including variations of facial expression and 3D pose, more efficiently than the typical AAM, and estimates an accurate 3D face shape from images (a sketch of the role of depth appears below).
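
As a rough illustration of why per-landmark depth helps with 3D pose, the sketch below attaches estimated depths to 2D landmarks, rotates the resulting 3D shape, and projects it back to the image plane; the paper's Jacobian-based depth estimation itself is not reproduced here.

```python
import numpy as np

def rotate_and_project(landmarks_2d, depths, yaw_deg):
    """Lift 2D landmarks to 3D using estimated depths, rotate the shape
    about the vertical axis, and project back orthographically. With a
    depth per landmark, out-of-plane rotation becomes a simple linear
    transform instead of an unmodeled appearance change."""
    yaw = np.deg2rad(yaw_deg)
    R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                  [0.0,         1.0, 0.0],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
    pts3d = np.column_stack([landmarks_2d, depths])  # (N, 3)
    rotated = pts3d @ R.T
    return rotated[:, :2]                            # drop depth to project

landmarks = np.array([[0.0, 0.0], [30.0, -5.0], [60.0, 0.0]])  # toy points
depths = np.array([5.0, 20.0, 5.0])                            # estimated depths
print(rotate_and_project(landmarks, depths, yaw_deg=15.0))
```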

A Study on Facial Expression Recognition using Boosted Local Binary Pattern (Boosted 국부 이진 패턴을 적용한 얼굴 표정 인식에 관한 연구)

  • Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1357-1367
    • /
    • 2013
  • Recently, as an image-based approach to facial expression recognition, research using ULBP block histogram features with an SVM classifier has been reported. Thanks to the properties of the LBP operator introduced by Ojala, such as high discriminative power, robustness to illumination changes, and simple computation, LBP is widely used in image recognition. In this paper, we combine $LBP_{8,2}$ and $LBP_{8,1}$ when computing the ULBP block histogram, in order to describe micro features in addition to shift and size changes. From 660 sub-windows of $LBP_{8,1}$ and 550 sub-windows of $LBP_{8,2}$, 1210 ULBP histogram features were extracted, and 50 weak classifiers were generated using AdaBoost. Various experiments confirmed that the hybrid ULBP histogram feature combining $LBP_{8,1}$ and $LBP_{8,2}$ with an SVM classifier improves the facial expression recognition rate: the hybrid boosted ULBP block histogram achieved 96.3%, showing the superiority of the proposed method (a minimal ULBP sketch follows below).
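
A minimal sketch of the $LBP_{8,1}$ operator and the 59-bin uniform-pattern (ULBP) histogram the abstract builds on; block partitioning, $LBP_{8,2}$ (which needs interpolation at radius 2), AdaBoost selection, and the SVM stage are omitted:

```python
import numpy as np

def lbp_8_1(img: np.ndarray) -> np.ndarray:
    """Basic LBP with 8 neighbors at radius 1 (no interpolation):
    each neighbor >= center contributes one bit to an 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbor = img[1 + dy: img.shape[0] - 1 + dy,
                       1 + dx: img.shape[1] - 1 + dx]
        code |= ((neighbor >= c).astype(np.uint8) << bit)
    return code

def is_uniform(pattern: int) -> bool:
    """Uniform patterns have at most two 0/1 transitions in the
    circular bit string; all non-uniform codes share one bin."""
    bits = [(pattern >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

uniform_codes = [p for p in range(256) if is_uniform(p)]  # 58 patterns
bin_of = {p: i for i, p in enumerate(uniform_codes)}

def ulbp_histogram(window: np.ndarray) -> np.ndarray:
    """Normalized 59-bin ULBP histogram for one sub-window."""
    hist = np.zeros(59)
    for p in lbp_8_1(window).ravel():
        hist[bin_of.get(int(p), 58)] += 1
    return hist / max(hist.sum(), 1)

window = np.random.randint(0, 256, (18, 18), dtype=np.uint8)
print(ulbp_histogram(window).shape)  # (59,)
```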

Study on the Railway Fault Locator Impedance Prediction Method using Field Synchronized Power Measured Data (실측 동기화 데이터를 활용한 교류전기철도의 고장점표정장치 임피던스 예측기법 연구)

  • Jeon, Yong-Joo;Kim, Jae-chul
    • Journal of the Korean Society for Railway
    • /
    • v.20 no.5
    • /
    • pp.595-601
    • /
    • 2017
  • With the electrification of railways, faults on the traction line are increasing year by year, so the fault locator is becoming more important. Nevertheless, on field traction lines it is difficult to locate the fault point accurately because of various conditions. In this paper, the current-loop equation of the railway feeding system is simplified and generalized using measured data, with substation and train power data measured under synchronized conditions, and the catenary impedance is then predicted through the generalized equation. A simulation model was also designed to examine the effect of the train's load current at a fixed location: the train current was varied from its minimum to its maximum and the catenary impedance was compared at the same location. Finally, power measurements were performed in the field at the train and the substation simultaneously, and the catenary system impedance was predicted and calculated. With this method, the catenary impedance can be measured more easily and continuously than with previous methods (a minimal sketch of the idea follows below).
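
The paper's generalized current-loop equation is not reproduced in the abstract; the sketch below only illustrates the underlying idea that synchronized substation and train phasors yield the loop impedance, with all numeric values invented:

```python
# With substation and train measurements synchronized in time, the
# voltage drop along the catenary divided by the load current gives the
# loop impedance up to the train; dividing by the known train position
# gives a per-km value. This single-loop formula is an assumption, not
# the paper's generalized equation.
v_substation = complex(27500.0, 0.0)   # V, phasor at the substation
v_train = complex(26850.0, -310.0)     # V, synchronized phasor at the train
i_train = complex(310.0, -95.0)        # A, train load current phasor
distance_km = 12.4                     # known train position

z_loop = (v_substation - v_train) / i_train  # total loop impedance (ohm)
z_per_km = z_loop / distance_km              # impedance per km
print(f"loop: {z_loop:.3f} ohm, per km: {z_per_km:.4f} ohm/km")
```

A fault locator then inverts the relation: dividing a measured fault-loop impedance by the per-km value yields a distance estimate to the fault.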

Model based Facial Expression Recognition using New Feature Space (새로운 얼굴 특징공간을 이용한 모델 기반 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.17B no.4
    • /
    • pp.309-316
    • /
    • 2010
  • This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as its feature space. To recognize the six main facial expressions, the proposed method takes a grid approach and establishes a new feature space based on the angles formed by each grid's edges and vertices. The approach is robust against affine transformations such as translation, rotation, and scaling, which in other approaches are very harmful to the overall accuracy of a facial expression recognition algorithm. The paper also demonstrates how the feature space is created from angles and how a feature subset is selected within this space with a wrapper approach. The selected features are classified by SVM and 3-NN classifiers, and the classification results are validated with two-tier cross-validation. The proposed method achieves a 94% classification rate, and the feature selection algorithm improves results by up to 10% over the full feature set (the angle computation is sketched below).
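
A minimal sketch of an angle-based feature: the angle at a grid vertex is unchanged by translating, rotating, or uniformly scaling the whole grid, which is the robustness claim above. Which edge pairs the paper uses per vertex is not given in the abstract, so the pairing below is an illustrative choice:

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle (degrees) formed at `vertex` by the two grid edges toward
    p1 and p2; invariant to translation, rotation, and scaling."""
    a, b = p1 - vertex, p2 - vertex
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A deformed face grid stores one 2D point per vertex; here a toy
# quadrilateral stands in for one cell of the face grid.
grid = np.array([[0.0, 0.0], [1.0, 0.1], [0.9, 1.0], [-0.1, 0.9]])
features = [angle_at(grid[i], grid[(i - 1) % 4], grid[(i + 1) % 4])
            for i in range(4)]
print(np.round(features, 1))  # interior angles; sum is ~360 degrees
```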

Quantified Lockscreen: Integration of Personalized Facial Expression Detection and Mobile Lockscreen application for Emotion Mining and Quantified Self (Quantified Lockscreen: 감정 마이닝과 자기정량화를 위한 개인화된 표정인식 및 모바일 잠금화면 통합 어플리케이션)

  • Kim, Sung Sil;Park, Junsoo;Woo, Woontack
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1459-1466
    • /
    • 2015
  • The lockscreen is one of the interfaces smartphone users encounter most frequently. Although users perform unlocking actions every day, lockscreens offer no benefit beyond security and authentication. In this paper, we replace the traditional lockscreen with an application that analyzes facial expressions in order to collect facial expression data and provide real-time feedback to users. To evaluate this concept, we implemented the Quantified Lockscreen application, which supports the following contributions of this paper: 1) an unobtrusive interface for collecting facial expression data and evaluating emotional patterns, 2) improved accuracy of facial expression detection through a personalized machine learning process (sketched below), and 3) enhanced validity of the emotion data through a bidirectional, multi-channel, multi-input methodology.
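
The abstract does not describe the personalization mechanism; the sketch below illustrates one plausible reading, incremental adaptation of a generic classifier with the user's own labeled frames, using scikit-learn's `partial_fit`. The feature vectors, labels, and class set are placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = np.array([0, 1, 2])  # e.g. neutral / positive / negative ids

# Start from a model trained on generic data, then keep updating it
# with small batches of the user's own frames as they accumulate on
# the device, so detection personalizes over time.
clf = SGDClassifier()
generic_X = np.random.rand(300, 64)           # placeholder features
generic_y = np.random.choice(EMOTIONS, 300)   # placeholder labels
clf.partial_fit(generic_X, generic_y, classes=EMOTIONS)

def adapt_to_user(clf, user_X, user_y):
    """One personalization step with a batch of the user's data."""
    clf.partial_fit(user_X, user_y)
    return clf

adapt_to_user(clf, np.random.rand(10, 64), np.random.choice(EMOTIONS, 10))
```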

A Real-time Interactive Shadow Avatar with Facial Emotions (감정 표현이 가능한 실시간 반응형 그림자 아바타)

  • Lim, Yang-Mi;Lee, Jae-Won;Hong, Euy-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.4
    • /
    • pp.506-515
    • /
    • 2007
  • In this paper, we propose a Real-time Interactive Shadow Avatar (RISA) that can express facial emotions that change in response to the user's gestures. The avatar's shape is a virtual shadow constructed from a real-time sampled picture of the user's silhouette. Several predefined facial animations are overlaid on the face area of the virtual shadow, according to the type of hand gesture. We use background subtraction to separate the virtual shadow (see the sketch after this entry), and a simplified region-based tracking method to track hand positions and detect hand gestures. To express smooth changes of emotion, we use a refined morphing method that uses many more frames than traditional dynamic emoticons. RISA can be applied directly to interactive media art, and we expect its detection scheme to serve as an alternative media interface for DMB and camera phones, which need simple input devices.

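A minimal background-subtraction sketch in the spirit of the description above; the grayscale difference threshold is an illustrative choice:

```python
import numpy as np

def shadow_mask(frame: np.ndarray, background: np.ndarray,
                threshold: float = 25.0) -> np.ndarray:
    """Pixels whose grayscale difference from a pre-captured empty
    background exceeds the threshold belong to the user's silhouette,
    i.e. the virtual shadow."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold  # boolean silhouette mask

background = np.full((240, 320), 120, dtype=np.uint8)  # empty scene
frame = background.copy()
frame[60:180, 100:220] = 30                            # user enters the scene
mask = shadow_mask(frame, background)
print(mask.sum(), "silhouette pixels")                 # the 120x120 region
```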

Realtime Synthesis of Virtual Faces with Facial Expressions and Speech (표정짓고 말하는 가상 얼굴의 실시간 합성)

  • 송경준;이기영;최창석;민병의
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.8
    • /
    • pp.3-11
    • /
    • 1998
  • This paper proposes a method for synthesizing a natural virtual face in real time by integrating high-quality facial animation with prosody-enriched speech. Given Korean text as input, the method synthesizes lip shapes and speech according to the text and synchronizes the facial animation with the speech. First, the text is phonologically converted, the sentence is analyzed, and durations are assigned to the consonants and vowels. The facial animation is generated by varying the lip shapes according to the phonemes and their durations. A natural virtual face is synthesized in real time through not only text-matched lip-shape changes but also 3-D head motion and various facial expression changes. In speech synthesis, accentual and intonational phrases are determined from the sentence analysis results, and the prosody model generated from these phrases controls the duration, intonation, and pauses required for high-quality synthesis. The synthesis units are a combination of demisyllables, which allow an unlimited vocabulary, and triphones (VCV), and the synthesis method is TD-PSOLA (a minimal TD-PSOLA sketch follows below).

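A minimal TD-PSOLA sketch: two-period Hann-windowed grains are extracted around analysis pitch marks and overlap-added at a uniform synthesis period, which shifts the pitch while preserving duration. Grain repetition for duration control and all of the prosody modeling described above are omitted:

```python
import numpy as np

def td_psola(signal, pitch_marks, target_period):
    """Resynthesize `signal` at a new pitch period. For each synthesis
    mark, the analysis mark closest in normalized time is chosen, a
    two-period Hann grain is cut around it, and grains are overlap-added
    at the target spacing (50% overlap)."""
    n_out = len(signal)
    out = np.zeros(n_out)
    half = int(target_period)
    for pos in range(half, n_out - half, half):
        i = int(round(pos / pitch_marks[-1] * (len(pitch_marks) - 1)))
        m = int(pitch_marks[min(max(i, 0), len(pitch_marks) - 1)])
        lo, hi = m - half, m + half
        if lo < 0 or hi > len(signal):
            continue  # skip grains that would run off the signal
        out[pos - half: pos + half] += signal[lo:hi] * np.hanning(2 * half)
    return out

sr = 16000
t = np.arange(sr) / sr
voiced = np.sin(2 * np.pi * 100.0 * t)                # toy 100 Hz "voice"
marks = np.arange(160, sr - 160, 160)                 # one mark per pitch period
lowered = td_psola(voiced, marks, target_period=200)  # resynthesized near 80 Hz
```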

DEM Generation by the Matching Line Using Exterior Orientation Parameters of the IKONOS Geo Imagery (IKONOS 위성영상의 외부표정요소로부터 정합선 수립에 의한 DEM 생성)

  • Lee, Hyo-Seong;Ahn, Ki-Weon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.24 no.4
    • /
    • pp.367-376
    • /
    • 2006
  • This study determines the optimum polynomial of exterior orientation parameters (EOPs) as a function of the line number of a linear array scanner. To estimate a priori EOPs, metadata of the IKONOS scene and ground control points are used. We select a first-order polynomial for modeling the position elements and a constant for the rotation elements (sketched below). The positioning accuracy of the determined EOPs is compared with that of RPCs bias-corrected by least-squares adjustment; there is almost no difference between the accuracies of the two methods. To obtain a digital elevation model (DEM), a matching line is established from the EOPs. The DEM is compared with one generated by the ERDAS IMAGINE software, which uses the bias-corrected RPCs. The height differences between the DEMs produced by the two methods fall within an allowable standard deviation; the produced DEM therefore shows accuracy similar to that of the verified method.
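
A sketch of the selected EOP model: first-order polynomials in the line number for the three position elements and constants for the three rotation elements. All coefficient values below are placeholders, not values from the paper:

```python
# Exterior orientation of a linear pushbroom scanner modeled per image
# line. Each scan line has its own perspective center, so position is
# a linear function of the line number, while the attitude is treated
# as constant over the scene, as the paper's model selection states.
pos_coeff = {  # element: (a0 offset, a1 per-line rate), placeholder values
    "X": (600000.0, 0.05),
    "Y": (4100000.0, 6.9),
    "Z": (680000.0, -0.01),
}
rot_const = {"omega": 0.001, "phi": -0.002, "kappa": 1.571}  # radians

def eop_at_line(line: float) -> dict:
    """Evaluate the EOP model at a given scan-line number."""
    eop = {k: a0 + a1 * line for k, (a0, a1) in pos_coeff.items()}
    eop.update(rot_const)  # rotations do not vary with the line number
    return eop

print(eop_at_line(0.0))      # EOPs at the first line
print(eop_at_line(10000.0))  # EOPs 10000 lines later
```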