• Title/Summary/Keyword: 표정점

A Study on Fault Location Estimation Technique Using the Distribution Ratio of Catenary Current in AC Feeding System (전차선 전류 분류비를 이용한 교류전기철도 고장점 표정기법에 관한 연구)

  • Jung, Ho-Sung;Park, Young;Kim, Hyeng-Chul;Min, Myung-Hwan;Shin, Myong-Chul
    • Journal of the Korean Society for Railway
    • /
    • v.14 no.5
    • /
    • pp.404-410
    • /
    • 2011
  • In an AC feeding system, the fault location is conventionally calculated from the ratio of currents absorbed at the neutral points of the ATs (autotransformers) or by measuring reactance. However, these methods can produce estimation errors for many reasons, and measuring the AT neutral-point currents requires additional measuring devices and communication equipment. To overcome these disadvantages, this paper proposes a novel technique using the distribution ratio of the catenary current. The proposed technique uses existing protective relays to measure the catenary current; from the measured data, the distribution ratio is calculated and the fault location is determined. Through simulation, we derived the correlation between the current ratio and the fault location. With this technique, additional equipment and expense can be avoided, and the fault location can be determined more accurately.
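
The idea reduces to a simple proportionality. Below is a minimal Python sketch, not the paper's formulation: the variable names, the two-end measurement setup, and the linear mapping from current ratio to distance are illustrative assumptions only.

```python
# Hypothetical sketch: locate a fault from the distribution ratio of catenary
# currents measured at the two ends of a feeding section. The linear relation
# between ratio and distance is an assumption, not the paper's derived curve.

def fault_location_km(i_near: float, i_far: float, section_km: float) -> float:
    """Estimate fault distance from the near-end measurement point.

    i_near     -- catenary current measured at the near end (A)
    i_far      -- catenary current measured at the far end (A)
    section_km -- length of the protected feeding section (km)
    """
    ratio = i_far / (i_near + i_far)   # distribution ratio of catenary current
    return ratio * section_km          # assumed linear ratio-to-distance map

print(fault_location_km(i_near=800.0, i_far=200.0, section_km=10.0))  # 2.0 km
```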

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.117-122
    • /
    • 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. FCPs play an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract FCPs with an expensive motion capture device or with markers, which inconvenience the subject or place a psychological burden on them. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve conventional algorithms for detecting the facial components on which FCP extraction is based.
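
As a rough illustration of marker-free, image-processing-only feature localization (a generic stand-in using OpenCV's stock Haar cascades, not the authors' improved component detectors; the input path is a placeholder), coarse component locations can supply candidate characteristic points:

```python
# Sketch: detect face and eye regions with stock Haar cascades and take the
# eye centers as simple candidate characteristic points.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    roi = gray[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        # eye centers serve as crude characteristic points
        print("candidate FCP (eye center):", x + ex + ew // 2, y + ey + eh // 2)
```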

Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙;이수용;이일병;정찬섭
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.53-58
    • /
    • 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted with the FCM (Fuzzy C-Means) clustering algorithm and Gabor wavelets. Feature extraction proceeds in two steps. In the first step, edges of the main facial components are extracted using the mean of the image's 2-D Gabor wavelet coefficient histogram; in the second step, final feature points are extracted from the edge information using the FCM clustering algorithm. This study shows that the principal components of facial expression images can be reconstructed from only the few feature points produced by FCM clustering. The approach can also be applied to object recognition as well as facial expression recognition.
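
For readers unfamiliar with the second step, here is a compact, self-contained fuzzy C-means sketch (standard FCM with fuzzifier m = 2 is assumed; the edge points and cluster count are placeholders), where the cluster centers play the role of the final feature points:

```python
# Standard fuzzy C-means: alternate between center updates and membership
# updates until the membership matrix stops changing.
import numpy as np

def fcm(points, c, m=2.0, iters=100, eps=1e-5):
    n = len(points)
    u = np.random.dirichlet(np.ones(c), size=n)   # membership matrix (n x c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        new_u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(new_u - u).max() < eps:
            break
        u = new_u
    return centers, u

edge_pixels = np.random.rand(200, 2)   # stand-in for extracted edge pixels
centers, _ = fcm(edge_pixels, c=8)     # 8 hypothetical final feature points
print(centers)
```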

Estimation and Generation of Facial Expression Using Deep Learning for Art Robot (딥러닝을 활용한 예술로봇의 관객 감정 파악과 공감적 표정 생성)

  • Roh, Jinah
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2019.05a
    • /
    • pp.183-184
    • /
    • 2019
  • This paper proposes a video-sequence expression-generating dialogue system for natural emotional communication between robots and people. The proposed system answers in a way that reflects the audience's emotional state as judged from real-time video data, and uses deep learning to generate, in real time, robot facial expressions that fit the context of the conversation. Training on some 30,000 video clips to recognize audience expressions achieved 88% accuracy, confirming that expression generation is feasible. The significance of this work is that it applies deep learning to robot facial expression generation, and that it can serve as a stepping stone for extending deep learning to the dialogue system itself.
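
The abstract does not describe the network, so the following is only a plausible shape for the frame-level expression classifier such a pipeline implies (PyTorch; the architecture, 64x64 grayscale input, and seven-class label set are all assumptions):

```python
# Hypothetical frame-level expression classifier, sketched in PyTorch.
import torch
import torch.nn as nn

class ExpressionNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 1, 64, 64) face crops
        h = self.features(x)
        return self.head(h.flatten(1))

logits = ExpressionNet()(torch.randn(1, 1, 64, 64))
print(logits.argmax(dim=1))               # predicted expression class
```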

Comparison of Match Candidate Pair Constitution Methods for UAV Images Without Orientation Parameters (표정요소 없는 다중 UAV영상의 대응점 추출 후보군 구성방법 비교)

  • Jung, Jongwon;Kim, Taejung;Kim, Jaein;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.647-656
    • /
    • 2016
  • The growth of UAV technology has expanded the range of UAV image applications. Many UAV image-based applications use incremental bundle adjustment. However, incremental bundle adjustment incurs large computational overhead because it attempts feature matching over all image pairs. For an efficient feature matching process, matching must be confined to overlapping pairs, which are normally identified using exterior orientation parameters. When exterior orientation parameters are unavailable, overlapping pairs cannot be determined this way, and other methods of constituting feature matching candidates are needed. In this paper we compare matching candidate constitution methods that require no exterior orientation parameters, including partial feature matching, bag-of-keypoints, and an image intensity method, using overlap determination based on exterior orientation parameters as the reference. Experimental results showed the partial feature matching method to be the most efficient.
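
To make the idea concrete, here is a small sketch in the spirit of the image intensity approach (the thumbnail size, similarity measure, and cutoff are assumptions, not the paper's parameters): rank pairs by the correlation of heavily downscaled images and keep only the top-ranked pairs for full feature matching.

```python
# Sketch: constitute match candidate pairs by thumbnail correlation.
import itertools
import numpy as np
import cv2

def thumbnail(path, size=(64, 64)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size).astype(np.float32).ravel()

paths = ["uav_001.jpg", "uav_002.jpg", "uav_003.jpg"]  # hypothetical images
thumbs = [thumbnail(p) for p in paths]

scores = []
for i, j in itertools.combinations(range(len(paths)), 2):
    ncc = np.corrcoef(thumbs[i], thumbs[j])[0, 1]  # correlation as similarity
    scores.append((ncc, i, j))

# keep the most similar pairs as match candidates (k=2 is an assumption)
for ncc, i, j in sorted(scores, reverse=True)[:2]:
    print(f"candidate pair: {paths[i]} <-> {paths[j]} (ncc={ncc:.2f})")
```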

Analysis of Perceptual Hierarchy for Facial Feature Point (얼굴 특징점의 지각적 위계구조 분석)

  • 반세범;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.11a
    • /
    • pp.189-193
    • /
    • 2000
  • To implement an expression recognition system, one must know which facial feature points are closely related to particular internal states. To this end, 39 facial feature points from the MPEG-4 FDP were used to analyze the correlation between facial features and the internal-state dimensions of pleasure-displeasure and arousal-sleep. From 150 photographs of stage actors performing various expressions, the images were processed around the 39 feature points using Gabor wavelets with 5 filter sizes and 8 filter orientations. Analysis of the correlations between the filter responses at these points and internal states showed that the pleasure-displeasure dimension was related mainly to feature points around the mouth and eyebrows, while the arousal-sleep dimension was related mainly to feature points around the eyes. As for filter size, mainly low-spatial-frequency filters were related to internal states; as for orientation, mainly oblique orientations were related.
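
A 5-scale, 8-orientation Gabor response vector at one feature point can be computed as below (a sketch only; the kernel size, sigma, and wavelength schedule are assumptions, and the image path and point coordinates are placeholders):

```python
# Sketch: 40 Gabor filter responses (5 scales x 8 orientations) at one point.
import numpy as np
import cv2

img = cv2.imread("actor_expression.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
x, y = 120, 85                       # hypothetical MPEG-4 FDP feature point

responses = []
for scale in range(5):
    lambd = 4.0 * (2 ** scale)       # assumed wavelength per scale
    for k in range(8):
        theta = k * np.pi / 8        # 8 orientations
        kernel = cv2.getGaborKernel((31, 31), 4.0, theta, lambd, 0.5, 0)
        filtered = cv2.filter2D(img, cv2.CV_32F, kernel)
        responses.append(filtered[y, x])

print(len(responses), "filter responses at the feature point")  # 40
```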

Analysis of facial expressions using three-dimensional motion capture (3차원동작측정에 의한 얼굴 표정의 분석)

  • 박재희;이경태;김봉옥;조강희
    • Proceedings of the ESK Conference
    • /
    • 1996.10a
    • /
    • pp.59-65
    • /
    • 1996
  • The human face is where human emotion shows most clearly, so there has traditionally been much effort to study facial expressions in connection with emotion. Recently, it has become possible to study facial expressions by measuring facial temperature changes, by measuring facial muscle movement with electromyography (EMG), and by image or motion analysis. In this study, changes in human facial expression were measured with three-dimensional motion analysis equipment. Two experiments were planned: in the first, subjects were asked to make smiling, surprised, angry, and neutral expressions, which were then measured; in the second, subjects were shown a comedy film and a horror film and their expression changes were measured. Five adult males participated. Because appropriate emotion-inducing stimuli could not be presented, the experiments and analyses could not be completed for all six basic expressions originally intended (smile, sadness, disgust, fear, anger, surprise). More elaborate experimental preparation covering the remainder is required in future research. Such research could be applied to sensibility (emotional) engineering, consumer response measurement, computer animation, and information display.

ASM-based Lip Line Detection System for Smile Expression Recognition (웃음 표정 인식을 위한 ASM 기반 입술 라인 검출 시스템)

  • Hong, Won-Chang;Park, Jin-Woong;He, Guan-Feng;Kang, Sun-Kyung;Jung, Sung-Tae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.444-446
    • /
    • 2011
  • This paper proposes a system that detects facial feature points from camera images in real time and recognizes smile expressions from the detected points. The proposed system acquires a face image in the real-time detection stage and then locates facial features using the model trained in the ASM (Active Shape Model) learning stage. The lip region is detected from the facial feature image. We found that detecting and recognizing the user's smile expression from the detected lip region and facial feature points improves the accuracy of smile expression recognition.
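
As an approximation of this pipeline (using dlib's pretrained 68-point landmark model as a stand-in for the paper's ASM fit; the predictor file and the smile heuristic are assumptions):

```python
# Sketch only: dlib's 68-point model stands in for the ASM fit; the smile rule
# below is a crude hypothetical heuristic, not the paper's recognizer.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lip_points(gray):
    """Return mouth landmarks (indices 48-67) of the first detected face."""
    faces = detector(gray)
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(48, 68)]

def looks_like_smile(pts):
    # assumption: mouth corners sit higher (smaller y) than the lower-lip center
    left_corner, right_corner, lip_bottom = pts[0], pts[6], pts[9]
    return left_corner[1] < lip_bottom[1] and right_corner[1] < lip_bottom[1]
```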

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometry of the face by template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation, with robust head pose estimation and facial variation, from input video.
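
The RBF step in the final fitting stage can be illustrated compactly (a minimal sketch assuming a Gaussian kernel; the paper's abstract does not specify the basis function, and all sizes below are placeholders):

```python
# Sketch: propagate displacements known at control points to non-feature
# vertices via Gaussian radial basis function interpolation.
import numpy as np

def rbf_deform(controls, displacements, vertices, sigma=0.1):
    """Interpolate control-point displacements onto arbitrary vertices."""
    d = np.linalg.norm(controls[None, :, :] - controls[:, None, :], axis=2)
    phi = np.exp(-(d / sigma) ** 2)                # kernel matrix (c x c)
    weights = np.linalg.solve(phi, displacements)  # solve for RBF weights
    dv = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=2)
    return np.exp(-(dv / sigma) ** 2) @ weights    # displacement per vertex

controls = np.random.rand(10, 3)     # hypothetical control points on the mesh
disp = np.random.rand(10, 3) * 0.01  # their animation-parameter displacements
verts = np.random.rand(100, 3)       # non-feature vertices to be moved
print(rbf_deform(controls, disp, verts).shape)     # (100, 3)
```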

The Effects of Regulatory Focus and Donees' Facial Expression on Intention of Doing a Charitable Deed (기부자의 조절초점과 기부수혜자의 표정제시방식이 기부의도에 미치는영향)

  • Park, Kikyoung;O, Min-Jeong;Park, Jong Chul
    • (The) Korean Journal of Advertising
    • /
    • v.28 no.2
    • /
    • pp.7-25
    • /
    • 2017
  • Previous studies of prosocial behavior have focused on donors' personal traits and the effects of donees' emotions. However, the effects on prosocial behavior of regulatory focus, as a motivational trait, and of the emotions evoked by donees' expressions have not been studied as thoroughly. Specifically, prevention-focused consumers perceive fit in the process of attaining a goal by avoiding negative factors, so their intention to do a charitable deed is expected to be greater when donees look sad than when they look happy. Promotion-focused consumers, in contrast, perceive fit in the consequential benefits of goal attainment under positive emotion, so their donation intention is expected to be greater when donees have happy faces rather than sad faces. The experimental results showed that prevention-focused consumers had higher donation intentions when donees were presented with a sad expression, mediated by sadness; conversely, promotion-focused consumers showed higher donation intentions when donees looked happy, mediated by happy feelings. This study is theoretically meaningful in extending previous regulatory focus research to donation contexts, and it offers practical implications for donation strategies concerning how donees are presented.