• Title/Summary/Keyword: 표정 요소 (facial expression elements)


Accuracy Analysis of Image Orientation Technique and Direct Georeferencing Technique

  • Bae Sang-Keun;Kim Byung-Guk
    • Spatial Information Research
    • /
    • v.13 no.4 s.35
    • /
    • pp.373-380
    • /
    • 2005
  • Mobile Mapping Systems are effective systems for acquiring position and image data using a vehicle equipped with a GPS (Global Positioning System), an IMU (Inertial Measurement Unit), and a CCD camera. They are used in various fields such as road facility management and map updating. In conventional photogrammetry, such as aerial photogrammetry, GCPs (Ground Control Points) are needed to compute the exterior orientation elements of an image (the position and attitude of the camera). These points are measured by field survey at the time of data acquisition, which costs considerable time and money, and it is not possible to obtain as many GCPs as desired. Mobile Mapping Systems are more efficient in both time and cost because they obtain the position and attitude of the camera at the time of photographing. That is, the image orientation technique must use GCPs to compute the exterior orientation elements, whereas direct georeferencing computes them directly from GPS/INS. In this paper, we compare the positional accuracy of ground points determined by the image orientation technique and by the direct georeferencing technique. (The collinearity condition underlying both is recalled after this entry.)

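For reference, the six exterior orientation elements discussed in this abstract (projection-centre coordinates and attitude angles) appear in the standard collinearity condition; the form below follows common photogrammetry texts rather than this particular paper.

```latex
% Collinearity condition: an image point (x, y) observes ground point (X, Y, Z)
% through the six exterior orientation elements -- projection centre
% (X_S, Y_S, Z_S) and attitude angles (omega, phi, kappa) defining M.
\begin{aligned}
x - x_0 &= -f\,\frac{m_{11}(X - X_S) + m_{12}(Y - Y_S) + m_{13}(Z - Z_S)}
                    {m_{31}(X - X_S) + m_{32}(Y - Y_S) + m_{33}(Z - Z_S)},\\
y - y_0 &= -f\,\frac{m_{21}(X - X_S) + m_{22}(Y - Y_S) + m_{23}(Z - Z_S)}
                    {m_{31}(X - X_S) + m_{32}(Y - Y_S) + m_{33}(Z - Z_S)},
\end{aligned}
\qquad M = M(\omega, \varphi, \kappa)
```

In the image orientation technique these equations are solved for the six unknowns using GCPs (space resection); in direct georeferencing GPS/INS supplies them directly and the equations are used only to intersect ground points.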

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. To extract the facial components, an eigenface is produced by PCA training on learning images, and an eigeneye and an eigenmouth are derived from the eigenface. The eye image is obtained by template-matching the upper image against the eigeneye, and the mouth image by template-matching the lower image against the eigenmouth. Expression recognition then uses geometric properties of the extracted eyes and mouth. Simulation results show that the proposed method achieves a higher extraction rate than previous methods; in particular, the extraction rate for the mouth image reaches 99%. A recognition system using the proposed method achieves a recognition rate above 80% for three facial expressions: fright, anger, and happiness. (A minimal eigen-template sketch follows this entry.)
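
A minimal sketch of the eigen-template idea described above, assuming aligned grayscale eye or mouth training patches are already available; the function names and the use of OpenCV are illustrative choices, not taken from the paper.

```python
import numpy as np
import cv2

def eigen_template(patches):
    """Build an approximate eigeneye/eigenmouth template from aligned
    grayscale patches (n_samples x h x w): the PCA mean plus the first
    principal component of the training patches."""
    n, h, w = patches.shape
    X = patches.reshape(n, -1).astype(np.float32)
    mean = X.mean(axis=0)
    # PCA via SVD on the mean-centred data; vt[0] is the first eigenvector.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    template = (mean + vt[0]).reshape(h, w)
    return cv2.normalize(template, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def locate_component(half_face_gray, template):
    """Template-match the eigen-template over the upper (eye) or lower
    (mouth) half of a detected face; returns the best-matching window."""
    result = cv2.matchTemplate(half_face_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)        # max location = best match
    h, w = template.shape
    return top_left, (top_left[0] + w, top_left[1] + h)
```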

On Parameterizing of Human Expression Using ICA (독립 요소 분석을 이용한 얼굴 표정의 매개변수화)

  • Song, Ji-Hey;Shin, Hyun-Joon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.15 no.1
    • /
    • pp.7-15
    • /
    • 2009
  • In this paper, a novel framework that synthesizes and clones facial expressions in parameter space is presented. To overcome the difficulty of manipulating face geometry models with high degrees of freedom, many parameterization methods have been introduced. Here, a data-driven parameterization method is proposed that represents a variety of expressions with a small set of fundamental independent movements obtained by the ICA technique. The face deformation due to the parameters is also learned from the data to capture the nonlinearity of facial movements. With this parameterization, the expression of an animated character's face can be controlled directly through the parameters. By separating the parameterization from the deformation learning process, the framework is expected to be adoptable for a variety of applications, including expression synthesis and cloning. The experimental results demonstrate the efficient production of realistic expressions using the proposed method. (An ICA decomposition sketch follows this entry.)

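A hedged sketch of the ICA-based parameterization: decompose face motion data into a small set of independent components that act as expression parameters. The data, dimensions, and the use of scikit-learn's FastICA are assumptions for illustration; the paper's nonlinear deformation learning is not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

# X: per-frame face data, each row a face expressed as vertex displacements
# from the neutral pose (n_frames x 3*n_vertices). Random placeholder here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3 * 200))

ica = FastICA(n_components=8, random_state=0)   # a small set of "fundamental movements"
params = ica.fit_transform(X)                   # per-frame expression parameters
basis = ica.mixing_                             # displacement pattern of each parameter

# Synthesis: choose parameter values and map them back to displacements.
# (A linear stand-in; the paper learns the deformation from data.)
new_params = params.mean(axis=0) + 0.5 * params.std(axis=0)
displacement = ica.inverse_transform(new_params[None, :])[0]
```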

Facial Expression Feature Extraction for Expression Recognition (표정 인식을 위한 얼굴의 표정 특징 추출)

  • Kim, Young-Il;Kim, Jung-Hoon;Hong, Seok-Keun;Cho, Seok-Je
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.537-540
    • /
    • 2005
  • This paper extracts expression features by detecting the eyes and mouth, the local facial elements that characterize expressions such as laughter, sadness, drowsiness, surprise, winking, and a neutral face, which carry varied information about a person's emotion, health, and mental state. The overall algorithm first detects the face region from the input image using color information, then extracts the eyes and mouth as local feature elements using the positional information of feature points in the face. This feature-point extraction stage applies preprocessing algorithms such as edge detection, binarization, morphology, and labeling. First-stage feature points (eyes, eyebrows, nose, and mouth) are extracted using the sizes of the labeled regions, and a second-stage extraction using accumulated histogram values and structural positional relationships isolates the eyes and mouth accurately. To measure expression features quantitatively with respect to expression changes, geometric information from the extracted eyes and mouth, such as their size and area, the distance between the eyes, and the distance from the eyes to the mouth, is used to extract features for six expressions. (A preprocessing sketch follows this entry.)

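The preprocessing chain named in the abstract (edge detection, binarization, morphology, labeling) maps naturally onto standard OpenCV calls; the sketch below is a simplified, assumed version with arbitrary thresholds, not the authors' implementation.

```python
import cv2
import numpy as np

def candidate_feature_regions(face_bgr):
    """Edge detection -> binarization -> morphology -> labeling on a detected
    face region; the labeled components can then be screened by size to pick
    eye/eyebrow/nose/mouth candidates (thresholds here are arbitrary)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                                   # edge detection
    _, dark = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)   # binarization
    mask = cv2.bitwise_or(edges, dark)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)           # morphology
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)  # labeling
    areas = stats[1:, cv2.CC_STAT_AREA]                                # skip background
    keep = np.where((areas > 50) & (areas < 5000))[0] + 1              # first-stage size screen
    return [(stats[i], centroids[i]) for i in keep]
```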

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expression is significant in mutual communication: it can convey countless inner feelings better than the diverse languages humans use. This paper proposes a muscle model-based 3D facial expression generation system for producing easy and natural facial expressions. Based on Waters' muscle model, it adds and uses the muscles necessary to produce natural expressions. Among the many elements involved in producing expressions, it focuses on the core feature elements of the face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group anatomically connected muscles. By simplifying and reconstructing the AUs (action units), the basic units of facial expression change, it generates easy and natural facial expressions. (A simplified linear-muscle sketch follows this entry.)
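
A rough illustration of a linear muscle acting on mesh vertices in the spirit of Waters' model; the falloff and parameters are assumed, textbook-style simplifications, and the paper's muscle grouping and AU reconstruction are not reproduced.

```python
import numpy as np

def linear_muscle(vertices, head, contraction, radius):
    """Pull mesh vertices within `radius` of the muscle attachment point
    `head` toward it, with a cosine falloff by distance: a simplified take
    on Waters' linear muscle (the angular falloff of the original model is
    omitted for brevity)."""
    out = vertices.copy()
    for i, v in enumerate(vertices):
        to_head = head - v
        d = np.linalg.norm(to_head)
        if 1e-9 < d < radius:
            falloff = np.cos(d / radius * np.pi / 2.0)   # 1 near the head, 0 at the edge
            out[i] = v + contraction * falloff * (to_head / d)
    return out

# Example: contract a hypothetical cheek muscle to raise a mouth-corner region.
verts = np.random.rand(200, 3)
deformed = linear_muscle(verts, head=np.array([0.3, 0.6, 0.2]),
                         contraction=0.05, radius=0.25)
```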

Experiment on Camera Platform Calibration of a Multi-Looking Camera System using single Non-Metric Camera (비측정용 카메라를 이용한 Multi-Looking 카메라의 플랫폼 캘리브레이션 실험 연구)

  • Lee, Chang-No;Lee, Byoung-Kil;Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.4
    • /
    • pp.351-357
    • /
    • 2008
  • An aerial multi-looking camera system is equipped with five separate cameras, enabling it to acquire one vertical image and four oblique images at the same time, which provides more diverse information about a site than vertical aerial photographs alone. The geometric relationship between the oblique cameras and the vertical camera can be modeled by six exterior orientation parameters. Once the relationship between the vertical camera and each oblique camera is determined, the exterior orientation parameters of the oblique images can be calculated from those of the vertical image. To examine the relative exterior orientation between the vertical camera and each oblique camera of the multi-looking system, calibration targets were installed in a laboratory and 14 images were taken from three image stations by tilting and rotating a non-metric digital camera. The interior orientation parameters of the camera and the exterior orientation parameters of the images were estimated; the exterior orientation parameters of each oblique image relative to the vertical image were then calculated from these, and the error propagation of the orientation angles and the projection-center position was examined. (A relative-orientation sketch follows this entry.)
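
The relative relationship between the vertical camera and each oblique camera reduces to composing rotations and projection centres; below is a minimal sketch under the usual convention that the rotation maps object space into the camera frame (notation assumed, not the paper's).

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Object-to-camera rotation from sequential omega-phi-kappa angles,
    the common photogrammetric convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def platform_offsets(R_vert, C_vert, R_obl, C_obl):
    """Attitude and offset of an oblique camera relative to the vertical
    camera, expressed in the vertical camera frame."""
    dR = R_obl @ R_vert.T
    dC = R_vert @ (C_obl - C_vert)
    return dR, dC

def oblique_from_vertical(R_vert, C_vert, dR, dC):
    """Recover the oblique camera's exterior orientation from the vertical
    image's exterior orientation plus the calibrated platform offsets."""
    return dR @ R_vert, C_vert + R_vert.T @ dC
```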

Exterior Orientation Parameters Determination from Satellite Imagery RPC Camera Model (위성영상 RPC 카메라 모델로부터 외부표정요소 결정)

  • Lee Hyo Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.23 no.1
    • /
    • pp.59-67
    • /
    • 2005
  • This paper proposes a method for determining exterior orientation parameters (EOPs) from the RPC mathematical camera model of a satellite image. A SPOT satellite stereo pair was first tested with the proposed method: geopositioning errors were similar to those obtained with the original EOPs, and the differences between the EOPs determined from the RPCs and the original EOPs were small. An IKONOS Geo-level stereo pair was then tested, and the results were compared with those of the RPC block adjustment method, which has been verified in previous studies. The proposed method showed accuracy similar to the RPC block adjustment, and the digital elevation models (DEMs) of the sample area produced by the two methods showed almost no difference. (The general RPC form is recalled after this entry.)
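
For context, the RPC camera model referred to here expresses normalized image coordinates as ratios of third-order polynomials in normalized object coordinates; the paper's contribution is recovering physical EOPs from this rational form. In the usual notation:

```latex
% RPC (rational polynomial coefficient) model: normalized line r_n and sample
% c_n as ratios of third-order polynomials in the normalized latitude,
% longitude and height (P, L, H).
r_n = \frac{\mathrm{Num}_L(P, L, H)}{\mathrm{Den}_L(P, L, H)}, \qquad
c_n = \frac{\mathrm{Num}_S(P, L, H)}{\mathrm{Den}_S(P, L, H)},
\qquad \mathrm{Num}, \mathrm{Den} \in \operatorname{span}\{\,P^i L^j H^k : i + j + k \le 3\,\}
```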

Facial Expression Recognition Technology (얼굴 표정 인식 기술)

  • Heo, Gyeong-Mu;Gang, Su-Min
    • ICROS
    • /
    • v.20 no.2
    • /
    • pp.39-45
    • /
    • 2014
  • Facial expression recognition is one of the most important elements of a human-centered human-machine interface. Current facial expression recognition technology mainly extracts features from face images and classifies them into emotion categories using a recognition model trained in advance. This paper explains the expression feature extraction techniques and expression classification techniques used in such systems, briefly organizes the methods widely used in each, and lists the characteristics of each technique. It also presents considerations for practical applications. Facial expression recognition technology is expected to be useful not only for human-centered human-machine interfaces but also in robotics. (A representative extract-then-classify pipeline is sketched after this entry.)
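
The extract-features-then-classify pipeline the survey describes can be mocked up with off-the-shelf components; PCA features and an SVM below are representative stand-ins only, since the survey covers many alternative techniques.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: flattened face images (n_samples x n_pixels), y: emotion category labels.
# Random placeholders stand in for a real expression dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 48 * 48))
y = rng.integers(0, 6, size=300)

# Feature extraction (PCA) followed by classification (SVM), one of many
# possible instantiations of the extract-then-classify pipeline.
model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X, y)
predicted_emotion = model.predict(X[:1])    # emotion category index for one face
```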

Structural Analysis of Facial Expressions Measured by a Standard Mesh Frame (표준형상모형 정합을 통한 얼굴표정 구조 분석)

  • 한재현;심연숙;변혜란;오경자;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1999.11a
    • /
    • pp.271-276
    • /
    • 1999
  • As groundwork for automatic expression recognition and synthesis and for building prototype facial expressions for each internal state, the characteristic structure of facial expressions representing specific internal states was analyzed. Using 90 images of six actors, labeled with fifteen internal states through a rating procedure, the study attempted to discover the characteristic structure of each expression. To ensure accuracy when standardizing different faces and directly comparing different expressions, each expression sample was fitted to a standard Korean face mesh model. For the feature points of each facial expression obtained from this fitting, the conclusion was that the coordinate values defined by the model alone cannot be used to interpret expressions, whereas the change values from the neutral face are effective. For some internal states no characteristic expression structure could be found, and the closer an internal state was to a basic emotion, the more consistent the structure tended to be. Where a characteristic expression for an internal state could be determined, its structure was characterized by only a subset of the facial expression elements. (A displacement computation is sketched after this entry.)

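The paper's key finding, that displacements from the neutral face rather than raw fitted coordinates carry the expression information, corresponds to a simple computation once every face has been fitted to the common standard mesh; the arrays below are placeholders, not the study's data.

```python
import numpy as np

# After fitting each photographed expression to the standard mesh model, every
# face is an (n_landmarks x 2) array of corresponding feature-point coordinates.
rng = np.random.default_rng(0)
neutral = rng.random((60, 2))                         # fitted neutral face (placeholder)
expression = neutral + rng.normal(0, 0.02, (60, 2))   # fitted expression sample (placeholder)

displacement = expression - neutral                   # per-landmark change from neutral
magnitude = np.linalg.norm(displacement, axis=1)
# The displacement vectors, not the raw fitted coordinates, carry the
# information needed to interpret the expression.
```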