• Title/Summary/Keyword: 표정 변화 (facial expression change)


A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.5 / pp.932-938 / 2012
  • Facial expression is significant in mutual communication: it can express humans' countless inner feelings better than any of the diverse languages humans use. This paper proposes a muscle-model-based 3D facial expression generation system that produces easy and natural facial expressions. Building on Waters' muscle model, it adds and uses the muscles needed to produce natural expressions. Among the many elements involved in producing an expression, it focuses on the core feature elements of the face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group the facial muscles that are anatomically connected. By simplifying and reconstructing the AU, the basic unit of facial expression change, it generates easy and natural facial expressions.
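As a rough illustration of how a Waters-style linear muscle displaces mesh vertices, the following is a generic sketch, not the authors' system: the function name `muscle_pull`, the influence radii `rs`/`rf`, and the simple pull-toward-attachment rule with cosine falloff are all assumptions.

```python
import math

def muscle_pull(vertices, head, rs, rf, contraction):
    """Pull 2D mesh vertices toward a muscle's bone attachment (head).

    Vertices within radius rs of the head feel the full contraction;
    influence decays with a cosine falloff out to rf, and vertices
    beyond rf are unaffected.
    """
    out = []
    for x, y in vertices:
        d = math.hypot(x - head[0], y - head[1])
        if d == 0.0 or d > rf:
            out.append((x, y))
            continue
        if d <= rs:
            falloff = 1.0
        else:
            falloff = math.cos(0.5 * math.pi * (d - rs) / (rf - rs))
        k = contraction * falloff
        # Move the vertex a fraction k of the way toward the attachment.
        out.append((x + k * (head[0] - x), y + k * (head[1] - y)))
    return out
```

In a full system each grouped muscle would contribute such a displacement field, and an AU would activate a set of muscles with given contraction values.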

Study of Facial Expression Recognition using Variable-sized Block (가변 크기 블록(Variable-sized Block)을 이용한 얼굴 표정 인식에 관한 연구)

  • Cho, Youngtak;Ryu, Byungyong;Chae, Oksam
    • Convergence Security Journal / v.19 no.1 / pp.67-78 / 2019
  • Most existing facial expression recognition methods describe facial features with a uniform grid that divides the entire face image into equal blocks. This approach may include non-face background, which interferes with discriminating expressions, and the facial content of each block varies with the position, size, and orientation of the face in the input image. In this paper, we propose a variable-sized block method that determines the size and position of the blocks that best represent meaningful facial expression changes. As part of this effort, we propose a way to determine the optimal number, position, and size of the blocks based on facial feature points. To evaluate the proposed method, we generate facial feature vectors using LDTP and build an SVM-based facial expression recognition system. Experimental results show that the proposed method is superior to the conventional uniform grid method. In particular, it adapts more effectively to changes in the input environment, performing relatively better than existing methods on images with large shape and orientation changes.
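The core idea of deriving block geometry from facial feature points can be sketched in a few lines. This is a hypothetical simplification, not the paper's optimization: here block size is simply tied to the inter-ocular distance, whereas the paper optimizes the number, position, and size of each block.

```python
import math

def blocks_from_landmarks(landmarks, eye_l, eye_r, scale=0.5):
    """Return one square block (x0, y0, x1, y1) per landmark.

    Block size is proportional to the inter-ocular distance, so the
    blocks scale with the face instead of with the image, unlike a
    uniform grid over the whole image.
    """
    iod = math.hypot(eye_r[0] - eye_l[0], eye_r[1] - eye_l[1])
    half = scale * iod / 2.0
    return [(x - half, y - half, x + half, y + half) for x, y in landmarks]
```

A recognition pipeline would then compute a descriptor (LDTP in the paper) over each block and concatenate the per-block histograms into the feature vector fed to the SVM.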

3D Facial Expression Creation System Based on Muscle Model (근육모델 기반의 3차원 얼굴표정 생성시스템)

  • 이현철;윤재홍;허기택
    • Proceedings of the Korea Multimedia Society Conference / 2002.05c / pp.465-468 / 2002
  • With recent advances in computer vision, research related to humans has gained importance, and diverse new approaches to human-computer interfaces are being attempted. In particular, research on facial shape modeling and on animating facial expression changes is being actively conducted; its uses are highly varied and its range of application continues to grow. In this paper, we generate a standard generic model suited to the facial characteristics of Koreans, and build a 3D shape model that, like an actual photograph, preserves an accurate shape according to each individual's features. In addition, to generate natural facial expressions, we develop a muscle-model-based facial expression generation system that produces natural and realistic facial animation.


Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho;Jung Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.5 / pp.383-390 / 2004
  • This paper proposes a method, and a system implementing it, that controls the facial expression of a 3D avatar by having the user select a sequence of facial expressions in a space of expressions. The expression space is created from about 2,400 frames of motion-captured facial expression data. Each expression state is represented by a distance matrix holding the distances between pairs of feature points on the face, and the set of distance matrices serves as the space of expressions. This space, however, is not one in which the straight line between two states is itself a valid trajectory; trajectories between states are instead derived approximately from the captured expressions. First, two states are regarded as adjacent if the distance between their distance matrices is below a given threshold. Any two states are then considered connected if a sequence of adjacent states links them, and one state is assumed to move to another along the shortest trajectory between them, found by dynamic programming. Because the space of expressions, as a set of distance matrices, is high-dimensional, we visualize it in 2D using multidimensional scaling (MDS), and the facial expression of the 3D avatar is controlled in real time as the user navigates this space. In a user study, participants who controlled a 3D avatar's expressions with the system judged it very useful for real-time expression control.
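The adjacency-and-shortest-path construction described above can be sketched generically. This is an illustration, not the authors' implementation; breadth-first search stands in for the paper's dynamic-programming search, since both find shortest paths in an unweighted adjacency graph, and `dist` is any metric on expression states (Frobenius distance between distance matrices, in the paper's setting).

```python
from collections import deque

def adjacency(states, dist, threshold):
    """Link every pair of states whose distance is below threshold."""
    n = len(states)
    nbrs = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if dist(states[i], states[j]) <= threshold:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def shortest_trajectory(nbrs, src, dst):
    """BFS: the trajectory through the fewest intermediate expressions."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in nbrs[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # no chain of adjacent expressions connects the two states
```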

The Change of Interior Orientation Parameters in Zoom Lens Digital Cameras (줌렌즈 디지털 카메라의 내부표정요소 변화)

  • Kim, Gi-Hong;Jeong, Soo;Kim, Baek-Seok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.1 / pp.93-98 / 2010
  • Recently, as digital photogrammetry has come into wide use in fields including construction, it is being applied in several industries. To apply a high-quality digital camera to digital photogrammetry, interior orientation must accurately determine the camera's focal length, lens distortion, and principal point location. In this study, we performed interior orientation for a zoom lens camera at regular time intervals and zoom factors, and analyzed the changes in the radial distortion parameters and the principal point location to evaluate the stability of the interior orientation. The results show that the radial distortion parameters ($k_1, k_2$) converge toward zero as the zoom factor increases, and that the displacement of the principal point coordinates ($x_p$, $y_p$) is correlated with the zoom factor, increasing as it rises.
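The radial distortion the abstract refers to is conventionally expressed with the Brown model, where $k_1$ and $k_2$ weight even powers of the radial distance from the principal point $(x_p, y_p)$. A minimal sketch of that model (not the paper's calibration code) makes the abstract's finding concrete: as $k_1, k_2 \to 0$, the mapping approaches the identity.

```python
def radial_distort(x, y, xp, yp, k1, k2):
    """Apply Brown-model radial distortion about the principal point.

    The distortion factor 1 + k1*r^2 + k2*r^4 scales the offset of
    (x, y) from the principal point (xp, yp).
    """
    dx, dy = x - xp, y - yp
    r2 = dx * dx + dy * dy
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xp + dx * f, yp + dy * f
```

With both coefficients zero the point is returned unchanged, which is why distortion becoming negligible at long focal lengths (high zoom) shows up as $k_1, k_2$ converging to zero.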

The Effect of Cognitive Movement Therapy on Emotional Rehabilitation for Children with Affective and Behavioral Disorder Using Emotional Expression and Facial Image Analysis (감정표현 표정의 영상분석에 의한 인지동작치료가 정서·행동장애아 감성재활에 미치는 영향)

  • Byun, In-Kyung;Lee, Jae-Ho
    • The Journal of the Korea Contents Association / v.16 no.12 / pp.327-345 / 2016
  • The purpose of this study was to carry out a cognitive movement therapy program for children with affective and behavioral disorders, grounded in neuroscience, psychology, motor learning, muscle physiology, biomechanics, human motion analysis, and movement control, and to quantify the characteristics of expressions and gestures as facial expressions change with emotional state. We could observe the problematic expressions of children with affective disorders, and could estimate the effectiveness of the movement therapy program from the changes in their facial expressions. By quantifying emotion and combining behavior therapy analysis with kinematic analysis, this converged measurement and analysis method for human development can also be expected to accumulate data for the early detection and treatment of developmental disorders. The results of this study could therefore be extended to the disabled, the elderly, and the sick, as well as to children.

Robust Face Feature Extraction for various Pose and Expression (자세와 표정변화에 강인한 얼굴 특징 검출)

  • Jung Jae-Yoon;Jung Jin-Kwon;Cho Sung-Won;Kim Jae-Min
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.294-298 / 2005
  • Among the various biometric technologies, face recognition has the advantage that, unlike fingerprint, palm, or iris recognition, it can identify a person through a remotely installed camera without requiring contact with any part of the body. However, face recognition is extremely sensitive to environmental changes such as variations in illumination and expression, so accurate extraction of facial feature regions must come first. The main facial features (eyes, nose, mouth, and eyebrows) can vary widely in position, size, and shape depending on pose, expression, and individual appearance. In this study, to accurately extract these varying feature regions and feature points, we classify the face into nine directions, further classify the feature regions within each direction into a second level according to their statistical shapes, and propose a detection method that builds a standard template for each shape.
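Matching a feature region against a set of standard templates can be illustrated minimally. This is a hypothetical sketch: the sum of squared differences (SSD) is one common similarity measure, and the abstract does not specify which matching criterion the authors use.

```python
def best_template(patch, templates):
    """Return the name of the stored standard template closest to patch.

    patch and each template are same-sized 2D lists of pixel values;
    similarity is measured by the sum of squared differences.
    """
    def ssd(a, b):
        return sum((x - y) ** 2
                   for ra, rb in zip(a, b)
                   for x, y in zip(ra, rb))
    return min(templates, key=lambda name: ssd(patch, templates[name]))
```

In the scheme described, one such template bank would exist per direction class and per statistical shape class, and the patch would be matched only against the bank selected by the two-stage classification.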


Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change (비선형 피부색 변화 모델을 이용한 실감적인 표정 합성)

  • Lee Jeong-Ho;Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.67-75 / 2006
  • Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since geometric deformation alone cannot generate realistic facial expressions, detailed features such as textures must also be deformed to achieve a more realistic result. Existing methods such as the expression ratio image have the drawback that detailed changes of complexion caused by lighting cannot be reproduced properly. In this paper, we propose a nonlinear model of skin color change and a model-based facial expression synthesis method that applies realistic expression details under different lighting conditions. The proposed method consists of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear skin color change model; and synthesis of the original face with the generated expression using a blending ratio computed by the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
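The third step, blending with a ratio derived from the Euclidean distance transform, can be sketched generically. This is illustrative only: the brute-force distance transform, the `falloff` parameter, and the linear ramp from 0 to 1 are assumptions, not the paper's model.

```python
import math

def euclidean_dt(mask):
    """Distance from each True pixel to the nearest False pixel.

    Brute force, so only suitable for small masks; real systems use
    a linear-time distance transform.
    """
    h, w = len(mask), len(mask[0])
    outside = [(i, j) for i in range(h) for j in range(w) if not mask[i][j]]
    dt = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                dt[i][j] = min(math.hypot(i - a, j - b) for a, b in outside)
    return dt

def blend(orig, synth, dt, falloff):
    """Blend the synthesized expression into the original face.

    The blending ratio ramps from 0 at the region boundary to 1 at
    depth falloff inside it, so the seam fades out smoothly.
    """
    h, w = len(orig), len(orig[0])
    return [[orig[i][j]
             + min(dt[i][j] / falloff, 1.0) * (synth[i][j] - orig[i][j])
             for j in range(w)] for i in range(h)]
```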

Detection of Face-element for Facial Analysis (표정분석을 위한 얼굴 구성 요소 검출)

  • 이철희;문성룡
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.131-136 / 2004
  • As media technology develops, diverse kinds of information are recorded in media, and facial expression is one of the most interesting, because expression reflects a person's inner state. Inner intent can be conveyed by gesture, but expression carries more information; an expression can also be produced deliberately, concealing a person's inner intent. Moreover, expressions have characteristics unique to each person, which makes classification possible. In this paper, we detect facial components in order to analyze expressions in USB camera video, since the feature points that change with a person's expression lie on the facial components. For component detection, we capture one frame of the video, locate the face, segment the face region, and detect the feature points of the facial components.

Facial Features Extraction for Recognition System of Facial Expression (표정인식 시스템을 위한 얼굴 특징 영역 추출)

  • Kim, Sang-Jun;Lee, Sung-Oh;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2003.07d / pp.2564-2566 / 2003
  • Facial expression recognition is an important topic in computer vision and remains under active study. An expression recognition system can be broadly divided into face region extraction and expression recognition, and face region extraction strongly affects the performance of the whole system. In particular, unlike a general face recognition system, an expression recognition system can hardly achieve high recognition performance unless an accurate face region is secured for faces whose shape changes substantially, whether locally or globally. Face region extraction therefore plays a major role in an expression recognition system. This paper describes a procedure that detects the face region in an image in real time, locates the eyes and mouth as facial feature points within that region, and uses them to determine the exact face region.
