• Title/Summary/Keyword: 표정상태벡터 (facial expression state vector)


Automatic facial expression generation system of vector graphic character by simple user interface (간단한 사용자 인터페이스에 의한 벡터 그래픽 캐릭터의 자동 표정 생성 시스템)

  • Park, Tae-Hee; Kim, Jae-Ho
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1155-1163 / 2009
  • This paper proposes an automatic facial expression generation system for a vector graphic character using a Gaussian process model. The proposed method extracts the main feature vectors from twenty-six facial expression data of a character, redefined based on Russell's internal emotion states. Using a new Gaussian process model, SGPLVM, we find low-dimensional feature data from the extracted high-dimensional feature vectors and learn a probability density function (PDF). All parameters of the PDF are estimated by maximizing the likelihood of the learned expression data, and they are used to select desired facial expressions on a two-dimensional space in real time. Simulation results confirm that the proposed facial expression generation tool works on small facial expression datasets and can generate various facial expressions without prior knowledge of the relation between facial expressions and emotions.
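
A rough sketch of the embedding step this abstract describes, hedged: the paper uses SGPLVM (a scaled GPLVM), while this example uses the plain GPLVM from the GPy library as a stand-in, on placeholder data.

```python
# Sketch: embed a small set of expression feature vectors into a 2-D latent
# space with a GPLVM, a stand-in for the paper's SGPLVM. Shapes are illustrative.
import numpy as np
import GPy

rng = np.random.default_rng(0)
Y = rng.standard_normal((26, 40))          # 26 expressions x 40-D features (placeholder)

model = GPy.models.GPLVM(Y, input_dim=2)   # 2-D latent space
model.optimize(messages=False)             # maximize the likelihood of the data

latent = np.asarray(model.X)               # 26 x 2 latent coordinates for the UI
print(latent.shape)
```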


The facial expression generation of vector graphic character using the simplified principal component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined based on Russell's internal emotion states. From this, we find the principal component vectors having the largest effect on the character's facial features and expressions and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expressions. The method saves considerable memory space and creates intermediate expressions with little computation, so the performance of a character generation system can be considerably improved in web, mobile, and game services where real-time control is required.
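
A minimal sketch of the blending idea, assuming synthetic control-point data: PCA over a few expression shapes, then intermediate expressions by interpolating the weights on the leading principal components.

```python
# Sketch: PCA over expression shapes, then blend two expressions by
# interpolating their weights in the reduced space. Placeholder data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((9, 60))              # 9 expressions x 60-D control points

pca = PCA(n_components=3)                     # components with the largest effect
W = pca.fit_transform(X)                      # per-expression weights (9 x 3)

def blend(i, j, t):
    """Intermediate expression between expressions i and j, 0 <= t <= 1."""
    w = (1.0 - t) * W[i] + t * W[j]           # interpolate in the reduced space
    return pca.inverse_transform(w.reshape(1, -1))[0]

halfway = blend(0, 1, 0.5)                    # a natural in-between expression
```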

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions:PartA / v.11A no.2 / pp.189-194 / 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data over a two-dimensional space, and a method for creating facial expression animation in real time as an animator navigates this space and selects desired expressions. About 2,400 facial expression frames were used to compose the expression space. Constructing the expression space reduces to determining the shortest distance between any two expressions. The expression space, treated as a manifold, approximates this distance as follows: an expression state vector is defined for each expression from a distance matrix holding the distances between pairs of markers, and if two expressions are adjacent, their straight-line distance is taken as an approximation of the shortest distance between them. Once adjacency distances between neighboring expressions are determined, they are chained by the Floyd algorithm to yield the shortest distance between any two expression states. To visualize the high-dimensional expression space, it is projected onto two dimensions by Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through a user interface.
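
A hedged sketch of the geodesic-distance construction on synthetic data: nearest-neighbour adjacency over expression state vectors, Floyd-Warshall for manifold distances, then a 2-D embedding. Metric MDS from scikit-learn stands in for Sammon's mapping, which additionally emphasizes small distances.

```python
# Sketch: adjacency graph -> Floyd-Warshall manifold distances -> 2-D layout.
# MDS is a stand-in for Sammon's mapping. Assumes the graph is connected;
# raise n_neighbors if it is not.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import floyd_warshall
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 45))            # 200 frames x 45-D state vectors

adj = kneighbors_graph(S, n_neighbors=6, mode="distance")  # adjacency distances
D = floyd_warshall(adj, directed=False)       # shortest (manifold) distances

embed = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = embed.fit_transform(D)                   # 2-D space the animator navigates
```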

Facial Expression Recognition using Model-based Feature Extraction in Image Sequence (동영상에서의 모델기반 특징추출을 이용한 얼굴 표정인식)

  • Park Mi-Ae; Choi Sung-In; Im Don-Gak; Ko Je-Pil
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.343-345 / 2006
  • This paper proposes a method for recognizing facial expressions in image sequences using an ASM (Active Shape Model) and a state-based model. The ASM fits the facial feature points in an input image, and the shape parameter vector produced during the fitting is extracted. The set of shape parameter vectors extracted from an image sequence is converted into a state vector that takes one of three states, and a classifier recognizes the facial expression. In the classification step, a new instance-based learning method is proposed to improve classification performance. Experiments show that the proposed instance-based learning method achieves a better recognition rate than the k-NN classifier.
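
An illustrative sketch, not the paper's exact method: each shape-parameter dimension of a sequence is mapped to one of three states (falling / steady / rising), and the resulting state vectors are classified, here with a plain k-NN baseline on placeholder data.

```python
# Sketch: sequences of ASM shape parameters -> three-state vectors -> k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def to_state_vector(seq, eps=0.1):
    """Map each parameter dimension to -1/0/+1 by its net change over the sequence."""
    delta = seq[-1] - seq[0]
    return np.sign(np.where(np.abs(delta) < eps, 0.0, delta))

rng = np.random.default_rng(0)
train_seqs = rng.standard_normal((30, 12, 8))  # 30 sequences x 12 frames x 8 params
train_y = rng.integers(0, 4, size=30)          # 4 expression classes (placeholder)

X = np.array([to_state_vector(s) for s in train_seqs])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_y)
print(clf.predict(X[:3]))
```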


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we fix a representation that expresses facial states based on the facial motion data. By distributing the facial expressions over an intuitive space with the LLE algorithm, animations can be created, and expressions controlled, in real time from the expression space through a user interface. About 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a two-dimensional plane and selecting a series of expressions, animations can be created and the expressions of three-dimensional avatars controlled in real time. Distributing the roughly 2,400 expression data over an intuitive space requires representing the state of each expression; for this, a distance matrix holding the distances between pairs of feature points on the face is used. The LLE algorithm then visualizes these data on the two-dimensional plane. Animators control facial expressions and create animations through the system's user interface, and the paper evaluates the experimental results.
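
A minimal sketch of the projection step, assuming synthetic marker data: each frame's state is its vector of pairwise feature-point distances, projected to 2-D with scikit-learn's LLE.

```python
# Sketch: pairwise-distance state vectors per frame -> 2-D LLE embedding.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
markers = rng.standard_normal((2400, 10, 3))      # 2400 frames x 10 markers x 3-D

features = np.array([pdist(f) for f in markers])  # pairwise distances per frame

lle = LocallyLinearEmbedding(n_neighbors=8, n_components=2)
xy = lle.fit_transform(features)                  # 2-D expression space to navigate
```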

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Model-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae; Ko, Jae-Pil
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1465-1473 / 2006
  • In this paper, we present an approach to facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, ASM locates the facial feature points and yields the shape parameter vector of the model, so we obtain the shape parameter vector set for all frames of an image sequence. The state-based model converts this vector set into a state vector taking one of three states. In the classification step, we use k-NN with a proposed similarity measure, motivated by the observation that the variation regions of an expression sequence differ from those of other expression sequences. In experiments on the public KCFD database, the proposed measure slightly outperforms the binary measure: the recognition rates of k-NN with the proposed measure and the existing binary measure are 89.1% and 86.2%, respectively, at k = 1.
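
A hedged sketch of the mechanism of plugging a custom similarity into k-NN; the paper's actual variation-region measure is not reproduced here, only an illustrative disagreement count on placeholder state vectors.

```python
# Sketch: k-NN with a user-supplied distance; scikit-learn accepts a callable.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def variation_distance(a, b):
    """Illustrative distance: count of dimensions where two state vectors disagree."""
    return float(np.sum(a != b))

rng = np.random.default_rng(0)
X = rng.integers(-1, 2, size=(40, 8)).astype(float)  # 40 three-state vectors
y = rng.integers(0, 4, size=40)                      # placeholder labels

clf = KNeighborsClassifier(n_neighbors=1, metric=variation_distance).fit(X, y)
print(clf.predict(X[:3]))
```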


Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim Sung-Ho
    • Journal of Internet Computing and Services / v.6 no.1 / pp.85-93 / 2005
  • This paper describes how to distribute a vast quantity of high-dimensional facial expression data over a suitable space and produce facial expression animations as an animator selects expressions while navigating this space in real time. We constructed the expression spaces from about 2,400 facial expression frames; they are created by calculating the shortest distance between any two expressions. The distance between two points in the expression space, a manifold, is approximated as follows: after defining an expression state vector for each facial state from a distance matrix of marker-to-marker distances, two expressions whose straight-line distance is shorter than a chosen threshold are considered adjacent, and that distance is taken as their shortest (manifold) distance. Once the distances between adjacent expressions are decided, the Floyd algorithm chains these adjacency distances to yield the shortest distance between any two expressions. The CCA (Curvilinear Component Analysis) technique projects the multi-dimensional expression space onto two dimensions for visualization. While animators navigate this two-dimensional space, they produce facial animations through a user interface in real time.
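
A sketch of the state-vector and threshold-adjacency construction reused across these papers, on synthetic data; Curvilinear Component Analysis itself has no standard scikit-learn implementation and is omitted here.

```python
# Sketch: state vectors from pairwise marker distances, adjacency by threshold.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
markers = rng.standard_normal((100, 10, 3))        # 100 frames x 10 markers x 3-D

states = np.array([pdist(f) for f in markers])     # expression state vectors

frame_dist = squareform(pdist(states))             # distances between state vectors
threshold = np.percentile(frame_dist[frame_dist > 0], 5)
adjacent = (frame_dist < threshold) & (frame_dist > 0)  # input to the Floyd step
```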


A Mathematical Modeling for the Physical Relationship between Camera and the Unknown Initial State of INS (카메라와 초기상태 정보가 없는 INS간 물리적 관계를 위한 수학적 모델링)

  • Chon Jae-Choon; Shibasaki Ryosuke; Zhao Huijing
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.145-149 / 2006
  • The physical relationship between a camera and an INS mounted on a mobile mapping system (the lever-arm distance and boresight angle between the two coordinate frames) must be calibrated in order to generate georeferenced information from camera images. To this end, existing work estimates the physical relationship through camera calibration, which computes the lens distortion and the interior/exterior orientation of a moving camera using targets with assigned 3D coordinates. In this estimation, the initial state (attitude and angles) of a low-cost INS is determined by the user's surveying; without precise surveying, the georeferenced information derived from camera images will be erroneous. To avoid the difficulty of precise surveying, this paper designs a mathematical model for the physical relationship between a camera and an INS with no initial-state information. The relationship between the camera and the initial INS reference frame can be defined as a coordinate transformation using the lever-arm and boresight; after the system moves, the position of the INS in the camera's initial reference frame can be defined by two vector paths. These two vector paths are defined as combinations of vectors computed from the relative exterior orientation between the camera and the INS. The paper estimates the lever-arm and boresight from multiple pairs of such paths.
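
An illustrative numpy sketch of the coordinate relationship described, with placeholder values: the camera-to-INS transform composed from a boresight rotation and a lever-arm offset as 4x4 homogeneous matrices. This is not the paper's estimation procedure, only the transform composition it builds on.

```python
# Sketch: rigid-transform composition with lever-arm and boresight placeholders.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 rigid transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

R_boresight = np.eye(3)                     # placeholder boresight rotation
t_lever_arm = np.array([0.5, 0.0, 0.2])     # placeholder lever-arm offset (m)

T_ins_cam = homogeneous(R_boresight, t_lever_arm)                 # camera -> INS
T_world_ins = homogeneous(np.eye(3), np.array([10.0, 5.0, 1.0]))  # INS pose

T_world_cam = T_world_ins @ T_ins_cam       # camera pose composed along one path
```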


Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix holding the distances between pairs of feature points on the face; the set of distance matrices forms the expression space. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To support this, the expression space is visualized in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and the paper evaluates the results.
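
A minimal sketch of the PCA projection step on synthetic data: flatten each frame's feature-point distance matrix into a state vector, then project all frames to 2-D.

```python
# Sketch: distance-matrix state vectors -> PCA -> 2-D navigation space.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
points = rng.standard_normal((2400, 12, 3))       # 2400 frames x 12 feature points

states = np.array([pdist(f) for f in points])     # flattened distance matrices

xy = PCA(n_components=2).fit_transform(states)    # 2-D space the user navigates
```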


Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.9-16 / 2007
  • This paper describes a methodology for distributing high-dimensional facial motion data on a two-dimensional plane using the Isomap algorithm, together with user interface techniques for controlling facial expressions by selecting them while the user navigates this space in real time. The Isomap algorithm proceeds in three steps. First, the neighbors of each expression datum are defined; the smallest adjacency distances used to define neighbors are computed with the Pearson correlation coefficient. Second, the manifold distances between expressions are calculated and the expression space is composed: the shortest (manifold) distance between any two expressions is obtained with the Floyd algorithm. Third, the multi-dimensional expression space is embedded by multidimensional scaling and projected onto a two-dimensional plane. Users can then control the facial expressions of a 3D avatar through the user interface while navigating the two-dimensional space in real time.
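
A hedged sketch of the three-step pipeline using scikit-learn's Isomap, which bundles neighbour search, shortest-path geodesic distances, and MDS in one estimator; the paper defines adjacency by Pearson correlation, whereas this sketch keeps the default Euclidean neighbours for simplicity.

```python
# Sketch: Isomap = neighbour graph + shortest-path geodesics + MDS, in one call.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
markers = rng.standard_normal((2400, 10, 3))      # 2400 frames x 10 markers x 3-D
states = np.array([pdist(f) for f in markers])    # expression state vectors

iso = Isomap(n_neighbors=8, n_components=2)
xy = iso.fit_transform(states)                    # 2-D plane the user navigates
```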